A Journey Into the Heart of Sports: Data Viz with Daren Willman
An Interview with Daren Willman, Director of Baseball Research & Development for Major League Baseball

The following interview has been lightly edited for clarity. “SN” refers to Senthil Natarajan, and “DW” refers to Daren Willman.

SN: Let’s get some quick context for our readers. Can you tell us a little bit about your background and experience?

DW: Sure. I’m Daren Willman, the Director of Baseball Research & Development for Major League Baseball. My background is in computer science — focused primarily on software development, but most recently I’ve been extremely interested in data visualization and how it can help sports teams, players, coaches, and front offices better understand the massive amounts of data available to them.

Home Run Derby Visualization on Baseball Savant

I played baseball all through college and always loved the game. After I graduated, I worked in law enforcement for almost 10 years, developing criminal history software. In 2012, I stumbled across a massive baseball dataset called PITCHf/x. In 2008, Major League Baseball had set up cameras in all 30 stadiums to track the metrics (velocity, release point, movement, pitch type) of every pitch thrown during a game. As soon as I saw the data, there was definitely a watershed moment of all the cool things that could be done with it. I immediately started scraping all the data and coming up with ideas on how I would want to use it if I were a player, so I started developing visuals that I might want to see. They started off fairly simple, like a spray chart of where balls in play were hit and the location of where pitches were thrown. Shortly after that, I started a website, Baseball Savant, and began adding the tools I was developing. I never really expected to work in sports, but as the site got more popular, opportunities started to come up with teams and the league. That’s when it started to become more of a reality that I could possibly end up making a move to sports full time.
My role with MLB is pretty much a dream job for me. I get to blend two of my biggest passions, technology and baseball. Most of my time is spent working with the new player tracking system, Statcast. Statcast tracks pretty much everything that happens on the baseball field: player positioning, pitch tracking, and hit tracking. Needless to say, this is a massive dataset, so I spend a lot of time analyzing it, developing applications to visualize the data, and trying to figure out what we can do with it that baseball fans might enjoy.

SN: So, you post a lot of those visuals on Twitter, and they’re really popular! What have been some of your favorite public visualizations that you’ve done?

DW: Here are some of my favorite data viz tweets of recent memory:

Every Baseball Stadium Drawn Over Each Other
Every Home Run Hit This Season
A First-Person View of a Dinger
Every MLB Team’s Game-by-Game Run Differential
Pitch Distribution Animations

SN: How do these viz, the ones you post publicly, differ from the ones you create for your job? What is that process like — accounting for the types of stakeholders you have now and the different conditions of satisfaction you deal with when it’s something bigger than a hobby or personal project?

DW: The public visuals I do don’t differ all that much from the ones I create at MLB. I’m given a lot of freedom to decide and iterate on what I find interesting. I like to share the progress of things I’m working on with the baseball community as I work on them, and I tend to get pretty good feedback from social media. It can be toxic at times because dataviz is similar to art: some people will really like the visuals, but some people will hate them.
I think one of the more challenging aspects of data visualization right now is how hard it can be to develop them. There’s a stigma with data visualization that it’s easy. In order to be really good at data visualization, you need to be pretty good at several different disciplines: scraping or wrangling data, organizing it, designing the visual, and actually writing the code to display it are all unique challenges.

SN: I really like a few of the concepts you just brought up there, so let’s drill down a little bit more on that. The first is the idea of leveraging a community to help move your work forward. What have you found is the best way to interact with a community, or some best-practice tips on putting your work out there to get feedback from that community? What’s the best way to make use of such a broad range of feedback? Was it ever kind of intimidating to just put your visuals out there in public when you first started? One of the topics we’ve discussed among the DVS members is how tough it can be for people to sometimes allow themselves to open up their work to a larger audience, for a broad range of reasons (but also the importance of doing so!). What are your thoughts on this?

DW: My typical process when working on new visuals goes something like this: after I’ve designed and collected the data for my idea, I’ll mock something up at blockbuilder.org using d3 (I do about 80% of my work with d3; the rest is typically done in three.js). It’s a great tool that Ian Johnson (@enjalot) made to rapidly develop in d3. After I think it’s polished enough for a first iteration, I’ll tweet it out to a broader audience and see what the community thinks. The feedback I receive can vary, but there are often things I don’t consider when creating visuals, typically color choices I might have made that don’t work well for colorblind people.
However, people will ask, “Why didn’t you use X chart instead?”, which helps me think about the problem in a different light. It can definitely be intimidating putting yourself out there, especially on a platform such as Twitter that can be toxic, but the more you do it, the more you learn to ignore the people who aren’t giving you good feedback.

Max Scherzer Pitch Types by Location — Pie Charts!

A perfect example of feedback using Twitter is here. I created a scatter pie chart using a concept I saw Elijah Meeks mentioning, and, as always, the anti-pie-chart crowd came out. I think this is an exceptional way to visualize every pitch a pitcher has thrown, since we’re dealing with larger amounts of data. A typical scatter plot would be way too much for this, but scatter pies are perfect because they allow the data to be binned by location and show which types of pitches were thrown, how many, and the general location — but someone will always gripe since it’s a pie chart.

SN: I am one of the people who gripes about pie charts! But I will agree with you that I can’t really think of a better way to bin the data appropriately for that specific purpose. This brings up an interesting point, which is designing for accuracy/precision versus impact. I recently wrote a little bit about how graphs with radial form factors really tend to bring out this tension. How do you straddle that line and balance those considerations? And have you found the arena of sports, which tends to be a very traditionalist space with a lot of inertia, to be more or less difficult to get buy-in on new or more niche/unique types of dataviz?

DW: I think it really just depends on the circumstances. You need to know that you’ll never make everyone happy. Going back to what we were discussing earlier about dataviz being similar to art, artists can’t expect everyone to love their work.
I think as long as you make a conscious effort to ensure the numbers or data you are trying to convey are shown clearly and not in a deceptive way, then using something like a radial chart, pie chart, etc. is fine. Another visual that seems to bring out this tension is the radar chart, but I think when those are done properly, they can in fact be very insightful [author’s note: score 1 for #TeamRadar]. I think sports is actually great when it comes to new data viz. From my experience, people seem very receptive to new ways to view data in sports. Sports, in general, has so many statistics and so much data now that it’s perfect for creating and developing new ways for fans to look at it. I think the younger generation of sports fans especially enjoys good ways of viewing the sports that they like. I’ve definitely found that the visuals that do exceptionally well on social media are ones which are quickly digestible and convey a clear message.

Pitcher Visualization from a Catcher’s Perspective on Baseball Savant

SN: As part of the dataviz process, you mentioned “designing” the visual. Can you elaborate a little bit on your design thinking process? How does an idea take shape? And how does that idea become reality?

DW: My design process isn’t very scientific. Typically, I’ll come across a really cool dataviz and think to myself, how can I do something similar with the data I have? I really like going to Andy Kirk’s website. He writes a “Best of” blog every month, and I’ll go there for some inspiration when I’m in the mood to create something. Also, with the recent addition of the DVS, there are so many awesome ideas flying around; it’s an awesome place to get inspired. It’s great practice to see something you like and re-create it from scratch using data you’re interested in.

SN: Let’s continue that train of thought on the dataviz process.
Being from a data analytics background myself, I keep going back to your point that dataviz is a very complex, multi-faceted process. I always find in data science projects that I spend a very large proportion of the time just finding and preparing data, so it’s encouraging to hear this from another perspective as well. Can you go a little bit into how people can attune themselves to that mentality of developing a well-rounded skillset in order to be good dataviz practitioners? How did you develop that multi-disciplinary, end-to-end approach? Does the data step get any easier now that you’ve got the full force of the MLB data infrastructure behind you?

DW: I was fortunate to come from a more data-centric background. I have been dealing with relatively large datasets since I graduated college, so wrangling data came pretty naturally to me when I began dealing with visualizations. Keeping focus on what I’m trying to do has always helped me. If I have an idea for a visualization I want to create, it helps to think of exactly what data I need to create that idea and how I can get that data. Typically for my visualizations, all the data I need is in a SQL database, so I can query it and extract it pretty easily; however, sometimes I have to write a quick script to scrape a site or an internal JSON feed. There are so many toolsets out there now to scrape data that it’s gotten pretty easy to do. I actually find the data wrangling process to be almost therapeutic. It’s like trying to solve a puzzle. Also, when just starting out with dataviz, it’s very easy to get discouraged. The process can be tedious and difficult, but there are so many skills you learn end-to-end that even if the project doesn’t turn out exactly the way you want, it’s great practice for the next time, and every time it gets a little bit easier. I think grabbing the data I need has gotten a bit easier since I’ve joined MLB.
My colleagues and I have spent many hours developing an internal data warehouse specifically for grabbing data in an easy fashion. There are always tricky situations writing queries when dealing with rolling windows and condensing millions of data points down to a concise dataviz, but that’s part of the fun.

DW: Yes, I noticed animations tend to do better on social media. I think adding animations to certain visualizations helps catch the eye and exaggerate certain points I’m trying to make. Having a static scatter plot of a pitcher’s strikeouts can help paint a picture, but when you show the same scatter plot point by point, sequentially, in an animation, it really helps drive home the fact that the player has a whole bunch of strikeouts, base hits, or whatever metric I’m trying to convey. Also, I think transitions just look cool and are fun to play with; I use them a lot to test out new ideas I’ve been thinking about.

SN: Let’s talk about that for a second, the concept of how to really drive home an idea. You’ve obviously talked about animation here, but what are some other techniques or ways that you experiment with or utilize to optimize your data viz? Colors, reframed perspectives, etc.?

DW: That’s definitely a great question. Animation certainly helps draw attention in, especially on a medium such as Twitter, but using color is a great way to highlight certain points on a visual. Recently, I’ve been experimenting a bit more with opacities to highlight certain points of animations and then slowly fading out the colors as the next bit of data is flashed on the visual. I still consider myself a novice when it comes to most visualization practices, and I’m always researching and looking for inspiration on this very topic. I’m extremely thankful for the dataviz community because that’s where so many of my ideas and experiments initially come from.
https://medium.com/nightingale/a-journey-into-the-heart-of-sports-data-viz-with-daren-willman-ce429da3f1dc
['Senthil Natarajan']
2019-08-05 16:58:54.735000+00:00
['Data Visualization', 'Sports', 'Sports Journalists', 'Design', 'Sportsviz']
Facebook Ads: 2017 Ad Size Guide
Co-Founder of Chigyosha Marketing Corp., based in Taipei. Addicted to data mining and content marketing. Passionate about visions and dreams.
https://medium.com/chigyosha/%E8%87%89%E6%9B%B8%E5%BB%A3%E5%91%8A-2017-%E5%BB%A3%E5%91%8A%E5%B0%BA%E5%AF%B8%E6%8C%87%E5%8D%97-6c41009fd5f1
['Henry Hu']
2019-03-03 01:53:33.360000+00:00
['Facebook', 'Digital Advertising', 'Facebook Ads', 'Editors Pick', 'Digital Marketing']
Deploying Apache Web Server on AWS Instance through Ansible
To perform the above scenario, let’s move ahead step by step.

Create an IAM Account

Create an IAM account with programmatic access to get the access key and secret key so that we can access AWS services. Click through the steps with the proper access policies, and your IAM account will be ready after a minute. If you are not comfortable with this process, simply visit the below-mentioned article.

Configuration of the Controller Node

After creating the IAM account, install Ansible on your system; that system is known as the controller node. If your controller node isn’t configured yet, you can visit the below-mentioned article, where you will find plenty of information about installing Ansible.

Create a Vault

Ansible Vault is a feature of Ansible that allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plaintext in playbooks or roles. Alternately, you may specify the location of a password file, or configure Ansible to always prompt for the password in your ansible.cfg file. Simply run the command below, put inside the file the access key and secret key that you downloaded from the IAM account, and save it with a suitable password.

ansible-vault create vault_file_name.yml

Install the boto Library

As you can see in the above figure, we can use our localhost IP address to act as a managed node, and we will use an SDK to launch the EC2 instance on AWS. Since Ansible is built in Python, we will use boto; boto is an API client library, so it has the capability to contact AWS.

sudo pip3 install boto3

Write a Playbook for the Instance

After installing the boto library, you can provision the EC2 instance through an Ansible playbook. So just create the playbook. After creating the playbook, run it with your vault using the command below, and enter the vault password when prompted:

ansible-playbook --ask-vault-password playbook_name.yml

After running the playbook, check your EC2 Dashboard to see whether the instance has been provisioned.
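The provisioning playbook itself appears only as a screenshot in the original, but a minimal sketch of it might look like the following. This assumes the classic ec2 module with the vault file holding the keys; the region, AMI ID, key pair name, and instance type here are placeholder values, not the author's.

```yaml
# playbook_name.yml -- hypothetical sketch; replace every value with your own
- hosts: localhost
  vars_files:
    - vault_file_name.yml            # contains aws_access_key / aws_secret_key
  tasks:
    - name: Provision one EC2 instance
      ec2:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: ap-south-1              # placeholder region
        image: ami-0123456789abcdef0    # placeholder AMI ID
        instance_type: t2.micro
        key_name: my_key                # placeholder key pair name
        count: 1
        wait: yes
        state: present
      register: ec2_out

    - name: Print the private IP (needed later for the inventory)
      debug:
        var: ec2_out.instances[0].private_ip
```

Running it with --ask-vault-password decrypts the vault file so the two keys are available to the task.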
Write down the private IP that the playbook printed; you will need it shortly.

Copy a Local File to a Remote System with the scp Command

To copy a file from the local system to the remote system, run the following command:

scp -i key_name.pem local_file remote_username@10.10.0.2:/remote/directory

Here key_name.pem is the private key used to authenticate, local_file is the file you want to copy, remote_username is the user on the remote server, and 10.10.0.2 is the server’s IP address. The /remote/directory is the path to the directory you want to copy the file to. If you don’t specify a remote directory, the file will be copied to the remote user’s home directory.

Give the User Login Permission to the Managed Node through ansible.cfg

If you want to log in to your EC2 instance dynamically, add the configuration below to your ansible.cfg file. By default, root login is disabled, so you can’t log in with the root account. Go to the location below on the controller node and edit the file:

sudo vi /etc/ansible/ansible.cfg

Make an Inventory and Write the Private IP of the Instance

Now that the config file on the controller node is configured, you can manage your instance (the managed node) however you want. Write into the inventory the private IP that you got when you ran the provisioning playbook, and check that your managed node is listed using the command below:

ansible all --list-hosts

Check whether your EC2 instance (managed node) is pingable:

ansible inventory_group -m ping

After checking all these things, create the role to deploy the web server on the managed node.

Create a Role for Deploying the Web Server

Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users.
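Several of the files in this walkthrough appear only as screenshots, so hedged sketches of them follow; every user name, path, group name, and package name here is a placeholder assumption, not the author's exact value.

The additions to ansible.cfg for logging in as a non-root user with privilege escalation might look like:

```ini
# /etc/ansible/ansible.cfg -- illustrative sketch
[defaults]
inventory = /etc/ansible/hosts
remote_user = ec2-user
private_key_file = /root/key_name.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false
```

The inventory, holding the private IP printed by the provisioning playbook, could be as small as:

```ini
# /etc/ansible/hosts
[inventory_group]
10.10.0.2
```

And the role's tasks file, written in the next step, which installs Apache, copies the webpage from the role's files/ directory, and starts the service:

```yaml
# roles/role_name/tasks/main.yml -- illustrative sketch
- name: Install the Apache web server
  package:
    name: httpd
    state: present

- name: Copy the webpage from the role's files/ directory
  copy:
    src: index.html
    dest: /var/www/html/index.html

- name: Start and enable the httpd service
  service:
    name: httpd
    state: started
    enabled: yes
```

With those pieces sketched, the role itself can be created with the command below.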
ansible-galaxy init role_name

After running the above command, go inside your role, open the tasks folder, and write the task inside the main.yml file. After writing the tasks, open the files folder, which is also inside your role, and put the webpage code there. After completing these things, write the playbook that deploys the web server using the role, and run the playbook with the command below:

ansible-playbook playbook_name.yml

After running the playbook, take the public IP of your EC2 instance (managed node) and browse to the webpage you deployed through the playbook. The webpage is working and displaying properly.

Conclusion

Here, you have learned how to provision an EC2 instance and deploy a webpage with a role dynamically. Hopefully you learned something here and enjoyed this article; I tried to explain as much as possible. Feel free to check out my LinkedIn profile mentioned below, and obviously feel free to comment. I write blogs on Cloud Computing, Machine Learning, Big Data, DevOps, Web topics, etc., so feel free to follow me on Medium. Thanks, everyone, for reading. That’s all… Signing Off… 😊
https://medium.com/hackcoderr/deploying-apache-web-server-on-aws-instance-through-ansible-330b066015a2
['Sachin Kumar']
2020-12-25 10:06:00.354000+00:00
['Rolê', 'Apache', 'Ansible', 'AWS', 'Vault']
Case Study 01: Making Visualizations Resonate with How Researchers Think
Visual Solution

This view is simpler: it culls unnecessary detail from the representation in order to represent the answer to precisely what the researcher wants to understand. This has a downstream effect on the researcher’s workflow: the visualization more clearly tracks, or resonates, with how the researchers think about the sample and its formation history. In the previous visualization, researchers spent much of their cognitive effort decoding the color blocks that they were examining. In this visualization, on each card the researcher can see precisely the location and distribution of each element or mineral.

Design Decisions

The interface that this visualization lives in is black, with light type and bright visualizations. I took this from radiology interfaces that I have worked on in the past. Radiologists are visual pattern analysts who see subtle and intricate differences in shades of gray. These shades represent the evidence of disease in the human body, and a radiologist is significantly responsible for how a disease will be treated; they must carefully see all of the tiny, subtle differences in color in an x-ray or MRI, because the patient’s health depends on it. In order to support this careful seeing, radiologic interfaces are dark, which removes excess light that would otherwise constrict the pupil. With their pupils open wide to catch all of the detail, the bright visualizations become the focus of their analytic attention. The UI for this visual tool takes a similar approach because the PIXL scientists are doing something very similar: grayscale investigation of subtle patterns which represent a complex story to be discovered. Another critical decision was to remove the color from each element map. This color originally played the role of helping the scientists to distinguish each element in opacity overlays.
Additionally, the team developed a common chromatic language around each element: for example, all scientists on the team agreed to represent iron with red, which helps with team cohesion and consensus building. Since we removed the opacity-overlay technique, each element map could be reduced to grayscale, which is a more perceptually consistent representation. Because of the contingencies of human eyes and visual-cortex perception, a dark blue pixel will appear darker than a yellow pixel even if they are at the same light value. By removing color from the map, we eliminated this challenge from the scientists’ daily workflow. In order to preserve the team’s chromatic language, each map card has a single color dot next to the element letters, allowing the team to maintain their same cognitive connection of element to color. One further challenge is the difficulty of recognizing features across element maps. To overcome this, we only allow horizontal spatial comparison of maps and intersections. This way, all distinct features will be at the same Y height in space, supporting side-to-side scanning tasks. To further clarify features, we created a feature-trace method which allows the scientist to trace a line across the feature in the context image. Once created, this line is propagated automatically across all element maps and intersections, solidifying the exact location of the same feature across all maps.

My Role

The primary transformation in this case study occurred when my team was able to understand the underlying scientific need or inquiry. The science team, partially because of their expert status, was unable to see that their thinking had been sculpted by the contingencies of their ecosystem of non-specialized tools. In identifying and helping the team to articulate precisely the line of inquiry that they wished to open, we were able to reveal a deeper need to fulfill through our visualization.
In this case, the final form of the visualization is less important in and of itself; what matters is that it reveals a novel form of inquiry, one which allows the scientists to focus on comprehending the sample as opposed to decoding imagery.
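A quick numerical footnote to the grayscale decision above: the claim that a dark blue pixel reads darker than a yellow pixel at the same channel value follows directly from standard luminance weights. Here is a minimal sketch using the Rec. 709 / WCAG coefficients (a simplification that skips sRGB gamma decoding; the exact perceptual model the team used is not stated in the case study):

```python
def relative_luminance(r, g, b):
    """Approximate perceived brightness of a color from its RGB channels
    (each in 0..1), using the Rec. 709 / WCAG weights. The green channel
    dominates because human vision is most sensitive to it."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

blue = relative_luminance(0.0, 0.0, 1.0)    # pure blue at full channel value
yellow = relative_luminance(1.0, 1.0, 0.0)  # pure yellow at full channel value
# Yellow comes out roughly 13x brighter than blue even though both use
# maxed-out channels, which is why equal-"value" colored pixels read as
# unequal in a grayscale comparison task.
```

This is the asymmetry the grayscale maps eliminate: once every element map uses only gray levels, equal data values map to equal perceived brightness.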
https://medium.com/thesis-modules/case-study-01-geochemical-visualization-933594e88c3
['Adrian Galvin']
2019-03-28 04:32:48.694000+00:00
['Space', 'Mars', 'Visualization', 'Data Visualization', 'Mars Rovers']
My Trials and Tribulations as an Entrepreneur
Photo by BBH Singapore on Unsplash If you asked me what being an entrepreneur is like I would have to say, “It’s hard. Plain and simple, being in business for yourself takes effort, a lot of time, and it can be lonely.” On the outside, it’s portrayed as all rainbows and unicorns. On the inside, it’s exhausting and, at times, discouraging. “Will I ever be successful? Do I have what it takes? Can my adrenals survive this?” are just a few of the questions that take up my headspace. Let’s be honest, most of us are drawn into becoming an entrepreneur under the guise of the freedom it will afford us. Entrepreneurs are driven. It’s the double-edged sword that leads to no boundaries between our personal and business lives, and before you know it, we’re working all the time. Been there, doing that. With a few years behind me now, I see what’s really been going on through this entrepreneur journey; my business has been a chrysalis of healing for me. I fled the corporate world of money and security to follow my dream and find my purpose. Except I didn’t really know what that was. I just knew on a deeper level to take one step at a time towards my interests and passions, and the journey back to me began. I believed I had to let go of the money to do meaningful work; that it was an “either/or” choice. Interestingly, the constant thread in both worlds was my need to strive and work hard. The common denominator was me and the deeply rooted belief that my success was defined by my external world. Photo by Samuel Zeller on Unsplash Funny enough, I recreated the same situation except I was no longer making a lot of money! I thought I had to choose one over the other and now see how my lack-and-scarcity mindset of climbing the corporate ladder transferred over to battling it out in the trenches of entrepreneurship. My goal as an entrepreneur was to create an abundant life with my business.
I believed that if I led a spiritual life, had meaningful work that gave back to others, and set up the right systems and processes, then I would make money while I slept, which would allow me the freedom and balance to do whatever I wanted to do. Except it hasn’t worked that way, and it has almost killed me trying to get “there”. Fast forward to today, and I see clearly that lack-and-scarcity thinking and an abundant mindset are nothing more than different sides of the same coin, fueling the “either/or” mentality. Ah, that’s the missing link! I haven’t been living from my heart. I haven’t been living a life in alignment with what I truly valued. Instead of spending time with those I love, I’ve been taking myself away from them. Now begins the big work of sifting through what is important to me so I can powerfully create my life one empowered decision at a time. Success isn’t defined outside of ourselves; it comes from within ourselves. I am now choosing to live in a “both/and” reality versus an “either/or” one, asking myself what I value and leading my business and life from there. There IS middle ground between lack and scarcity and abundance, and it’s called enough: enough love, enough money, enough time, enough doing, and all the excess is just fluff. What is enough for me, however, may not be enough for you. That’s why each one of us needs to go inward and connect with our hearts to see what we truly value. Photo by Felicia Buitenwerf on Unsplash Living between lack-and-scarcity and abundant thinking creates a striving energy that reinforces the societal belief that we are not enough. This striving shows up when we are going against the current. Yet, when we decide what is enough in each area of our life, it creates an allowing energy which puts us in the flow, and life unfolds with grace and ease. I never imagined that this whole journey of being an entrepreneur was really all about a journey back to me. It’s not what I expected or planned, it just happened.
And like the caterpillar in its cocoon, which cannot comprehend the possibility of becoming a butterfly because it’s totally out of its realm of possibility, and yet it just happens. All the trials of entrepreneurship have led me to my wounds, and as a result, I’ve uncovered the gifts that they hold. It’s been a wild ride, that’s for sure. Recognizing that my business and myself are really one and the same. Doing the inner work IS hard. It DOES take effort. And…a big AND here, the lessons ARE worth it. I’m choosing to relax into who I am and trust the process. I am choosing to strive less and be more. I am allowing…and I’m grateful for it. Leigh Ann makes her home with her family in the Canadian Rocky Mountains, where they enjoy the outdoors, community, and small-town living. Best known as a Transformational Leader and Intuitive Change-maker, Leigh Ann is on a mission to up-level women’s lives through feminine leadership and entrepreneurship. With a unique blend of credentials in Business, Feng Shui, and Life Coaching, Leigh Ann inspires women with new skills, new thinking, and new hope to transform their lives to become powerful creators of a life they love. If you are ready to take the next step towards a life you love, connect with Leigh Ann for a free 40-minute clarity call by clicking here. You are worth it!
https://medium.com/nexus-generation/my-trials-and-tribulations-as-an-entrepreneur-f17aea63b61
['Leigh Ann Betts']
2019-07-09 21:43:47.959000+00:00
['Life Lessons', 'Self Improvement', 'Business', 'Entrepreneurship', 'Abundance']
5 Innovative AI Software Companies You Should Know
With AI often thrown around as a buzzword in business circles, people often forget that machine learning is a means to an end, rather than an end in itself. For most companies, building an AI is not your true goal. Instead, AI implementation can provide you with the tools to meet your goals, be it better customer service through an intuitive chatbot or streamlining video production through synthetic voiceovers. To help shed light on some real-world applications of machine learning, this article introduces five innovative AI software companies that you should keep an eye on throughout 2020. 1. Scanta Scanta is an AI startup with a very interesting history. The company started off creating augmented reality games and AR software for social media. You can get a glimpse of some of their technology in this episode of Expedition Unknown on the Discovery Channel: Despite their interesting projects in AR, the company has made a huge pivot into a seemingly untapped sector of the AI industry: advanced security for chatbot virtual assistants. Founded by Chaitanya Hiremath in 2016, the company operates out of San Francisco, with additional offices in India. Scanta’s Chatbot Security Software: VA Shield Today, Scanta’s main service is the protection of virtual assistant chatbots against machine learning attacks. Their solution, known as VA Shield, “analyzes requests, responses and conversations to and from the system to provide a new layer of supervision.” Scanta’s security services differ from the safety provided by conventional IT security teams. Often, teams are either unprepared to deal with machine learning attacks or are unaware that such vulnerabilities exist and therefore aren’t tracking them at all. However, vulnerabilities in chatbots do exist. One famous example is the lawsuit by Delta Airlines in 2019.
Delta sued its chatbot developer for a large data breach due to vulnerabilities in their chatbot system which resulted in a leak of confidential user data and credit card information. As more and more companies begin adopting chatbots as part of their customer service infrastructure, it is possible that these attacks will become more commonplace. Scanta is positioning itself to become an industry leader in Chatbot security services, but also plans to extend its reach to provide security for other machine learning technologies. 2. Descript Descript is a software company that develops products for content creators. Founded in 2017 by Andrew Mason (co-founder of Groupon), the company operates out of San Francisco. Descript’s Synthetic Voice AI Software Descript’s main product is their video and audio editing software made for podcasters and video content creators. However, in 2019 the company acquired Lyrebird, an AI startup from Montreal, Canada. Lyrebird now operates as Descript’s AI research team, which is working on automated speech-to-text and synthetic voice technology. Synthetic voice technology is a niche sector of the AI industry. However, synthetic voices show a huge potential to improve video game development and film-making. Lyrebird is one of the first companies to dive into synthetic voice development and have thus garnered attention from media outlets such as Wired and Techcrunch. 3. Replica Replica is an AI startup that also develops synthetic voice technology. Founded in December 2017 by Shreyas Nivas, Riccardo Grinover, and Keni Mardira, the company operates out of Australia and the United States. In a 2019 interview, CEO Shreyas Nivas said that they were building “a marketplace for the world’s voices” where voice actors and regular people could license their voices to be used in video games, commercials, television programs, and any other form of media that requires voiceovers. 
Replica Studios Synthetic Voice Software

Replica Studios is an industry-leading platform that allows game developers, video content creators, and the general public to create and train their own synthetic voices. From video game development to narration of television programs, there are many applications for synthetic voice technology. One of the most interesting and beneficial use cases may be the creation of a synthetic voice for people with health conditions, such as ALS, that cause them to lose the ability to speak. By recording their voice before they lose their speaking ability, we can create a synthetic copy of their voice to be used in speech aid devices. With huge improvements to the Replica Studios platform and the release of the Replica speech generation API, this is definitely one company to keep your eye on.

4. Clearview

One of the most infamous names in the AI industry today, Clearview is a company that provides a reverse face image search solution for law enforcement. Using a state-of-the-art facial recognition algorithm, Clearview scans the face image of a target subject and then scours the internet for all publicly available images that may match the facial features present in the target image. Founded in 2017 by Hoan Ton-That and Richard Schwartz, the company operates out of New York City.

Clearview's Facial Recognition AI Software

Clearview houses a large facial image database composed of images that are publicly available on the web through social media, blogs, and other websites. The ultimate goal of Clearview is to provide law enforcement with powerful tools to catch criminals. However, many people have concerns about how the existence of such technology will affect privacy laws.
In fact, the New York Times ran an exposé about Clearview claiming that the company might "end privacy as we know it." Luckily, the GDPR gives individuals control of their own data, allowing users to request their profile from Clearview, should they wish to see it. Hopefully, more countries will follow suit and enact data privacy laws similar to the GDPR. Clearview is likely to be at the forefront of the debate around the ethics of facial recognition for the foreseeable future. Any resulting regulations, or lack thereof, will set a precedent for other facial recognition developers and startups.

5. Lionbridge AI

Lionbridge is a global AI training data provider and data collection company. The company leverages over 50 offices worldwide and a community of over 1 million contributors to create training data at scale. Founded in 1996, Lionbridge began as a language services provider and bolstered its expansion into the machine learning industry by acquiring Gengo AI in January of 2019.

Lionbridge AI's Data Annotation Software

The company recently announced the release of the Lionbridge AI Platform, standalone software for image, video, audio, and text annotation. Using the platform, data science teams can upload their data, invite other team members, and annotate their datasets together in a collaborative effort. Teams can also track progress and output from individual contributors. Hailed by Forbes as one of America's largest employers, such an influential company expanding its reach in the AI market is something to take note of. If the company's expansion into the AI industry can replicate its success in translation and localization, its data annotation software could help data science teams both large and small get access to high-quality training data.
https://medium.com/datadriveninvestor/5-innovative-ai-software-companies-you-should-know-9c967cfc3e90
['Limarc Ambalina']
2020-07-30 20:04:04.964000+00:00
['Machine Learning', 'Ai Software', 'Tech', 'AI', 'Software']
Improve App Engine Startup Times through Warmup Requests
Season of Scale

"Season of Scale" is a blog and video series to help enterprises and developers build scale and resilience into their design patterns. In this series we walk you through patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. In Season 2, we're covering how to optimize your applications to improve instance startup time! If you haven't seen Season 1, check it out here.

How to improve Compute Engine startup times
How to improve App Engine startup times (this article)
How to improve Cloud Run startup times

When it comes to gaming, whether it's action, RPG, or simulation, staying in the action means being online. But in the offline world, you rep your fandom with merch. Will Critter Junction's e-commerce shop hosted on App Engine be able to handle the hype? Read on.

Check out the video

Review

In the last article, we helped Critter Junction investigate their Compute Engine instances to identify whether latency stemmed from request, provision, or boot times. We also helped them use custom images to reduce boot times. Now their eyes are on their App Engine instances. Without any in-person conventions this year, Critter Junction players from around the world have begun to flood the site to purchase character cards, apparel, and other swag. Critter Junction has been testing App Engine Standard to run their new merchandise shop because App Engine autoscales the application across multiple instances to meet the demands of additional traffic.

Pending latency

During load testing, they used Cloud Trace to measure response latency, and noticed higher-than-usual latency when they sent heavier concurrent requests to their service's HTTP endpoint. Pending latency is how long a request can sit in the queue before App Engine decides to spin up another instance.
If all of your app instances are busy when a request arrives, the request waits in a queue to be handled by the next available instance. As the load increases, this means requests are processed more slowly. App Engine will then start a new instance based on limits you set, like CPU utilization, throughput, and max concurrent requests of the currently running instances. But what they didn't know is that App Engine also needs to load the app's code into a fresh instance when:

They redeploy a new version of their app
Maintenance and repairs of underlying infrastructure or physical hardware occur

Though cold starts on App Engine are rare, the first request, or loading request, sent to a new instance can take longer to process because the instance first has to load your app's code, including any libraries and resources needed to handle the request. This means a gradual increase in response times while handling new traffic.

Warmup Requests

What you want instead is for initialization to happen before a new instance serves live traffic. You can do this by issuing a warmup request, which loads application code into an instance ahead of time, before any live requests reach it. App Engine attempts to detect when your app needs a new instance and initiates a warmup request to initialize it. New instances accept requests only after they finish loading your app's code, so new requests can then be handled faster.

How it works

Warmup requests are used by the App Engine scheduler, which controls autoscaling of instances based on your configuration. App Engine issues GET requests to /_ah/warmup. You can implement handlers for this request in your code to perform application-specific tasks like pre-caching app data. For most supported languages, just add the warmup element under the inbound_services directive in your app.yaml file. Then create a handler that will process requests sent to /_ah/warmup.
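The pieces described above (the warmup element under inbound_services, plus the autoscaling limits) can be sketched in app.yaml. This is a minimal illustration for the App Engine standard environment; the runtime name and the scaling numbers are assumed example values, not recommendations:

```yaml
runtime: go115        # assumed runtime; use your app's actual runtime

# Enable warmup requests: App Engine sends GET /_ah/warmup
# to a new instance before routing live traffic to it.
inbound_services:
- warmup

# Example autoscaling limits that influence when the scheduler
# decides to spin up a new instance (values are illustrative).
automatic_scaling:
  max_pending_latency: 250ms
  target_cpu_utilization: 0.65
  max_concurrent_requests: 40
```

With this in place, your app still needs a handler for /_ah/warmup, as shown in the walkthrough below.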
Your handler should perform any warmup logic that is needed by your app.

Walkthrough

Let's walk through a Go example. Our main function performs the required setup steps for the application to function and logs when an App Engine warmup request is used to create the new instance. These warmup steps happen in setup for consistency with cold-start instances. The setup function executes the per-instance, one-time warmup and initialization actions. Finally, indexHandler responds to requests with our greeting.

```go
// Sample warmup demonstrates usage of the /_ah/warmup handler.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

var startupTime time.Time
var client *storage.Client

func main() {
	// Perform required setup steps for the application to function.
	// This assumes any returned error requires a new instance to be created.
	if err := setup(context.Background()); err != nil {
		log.Fatalf("setup: %v", err)
	}

	// Log when an App Engine warmup request is used to create the new instance.
	// Warmup steps are taken in setup for consistency with "cold start" instances.
	http.HandleFunc("/_ah/warmup", func(w http.ResponseWriter, r *http.Request) {
		log.Println("warmup done")
	})
	http.HandleFunc("/", indexHandler)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
		log.Printf("Defaulting to port %s", port)
	}
	log.Printf("Listening on port %s", port)
	if err := http.ListenAndServe(":"+port, nil); err != nil {
		log.Fatal(err)
	}
}

// setup executes per-instance one-time warmup and initialization actions.
func setup(ctx context.Context) error {
	// Store the startup time of the server.
	startupTime = time.Now()

	// Initialize a Google Cloud Storage client.
	var err error
	if client, err = storage.NewClient(ctx); err != nil {
		return err
	}
	return nil
}

// indexHandler responds to requests with our greeting.
func indexHandler(w http.ResponseWriter, r *http.Request) {
	if r.URL.Path != "/" {
		http.NotFound(w, r)
		return
	}
	uptime := time.Since(startupTime).Seconds()
	fmt.Fprintf(w, "Hello, World! Uptime: %.2fs\n", uptime)
}
```

One thing to note is that warmup requests don't work in all cases: even when enabled, App Engine makes only a best-effort attempt to send requests to already warmed-up instances. So you might still face loading requests, for example if the instance is the first one being started up or if there's a steep ramp-up in traffic. In those cases, you should use resident instances, which you can learn about below.

Stickers for everyone!

Once warmup requests were implemented, Critter Junction was able to reduce cold-start instances during increases in traffic to their online shop once convention season was underway. Check out the documentation for language-specific steps on warmup requests. Stay tuned for what's next for Critter Junction. And remember, always be architecting.

Next steps and references:
https://medium.com/google-cloud/improve-app-engine-startup-times-through-warmup-requests-b424504bde14
['Stephanie Wong']
2020-10-01 21:20:11.001000+00:00
['Software Development', 'Google Cloud', 'App Engine', 'Google Cloud Platform', 'Cloud Computing']
Search Contents of a PDF File in SharePoint Online, Make them Searchable Using Microsoft Flow
(SharePoint Online) + Flow = (Salt + Pepper)

Search Contents of a PDF File in SharePoint Online, Make them Searchable Using Microsoft Flow

Sibeesh Venu, Mar 4 · 6 min read

Sample OCR Flow

Introduction

We all get stuck somewhere in our so-called "Programmer Life" over a small requirement. I was stuck with one such requirement: the content of PDF files uploaded to my SharePoint Online was not searchable, although the PDFs I created manually from Word documents worked fine. Let me tell you why.

Typically there are three kinds of PDF files:

Normal PDF: These are the files that you get from applications like Microsoft Word, Adobe tools, etc. The beauty of this type is that its content can be searched; you can select the text, style it, copy-paste it, and so on.

Scanned PDF: This one is exactly the opposite of the first, and it was the villain in my requirement. Though the content looks visually the same, it cannot be searched, selected, or copy-pasted, because in the end it is just an image inserted into a PDF document. To read the contents of such a file, a technology called OCR (Optical Character Recognition) comes into the picture. With OCR we can read the content and make it searchable, which brings us to the third type of PDF file.

Searchable/OCRed PDF: This is the type we get as output from the OCR process. It has two layers: the image from the scanner, and the recognized text content. With these two layers, the file behaves almost like the first kind.

Now let's see what my requirement was and how I solved it.
You can always read this article on my blog here.

Background

Technology is fast; start running today if you want to touch it. I have a OneDrive sync folder to which I save the scanned PDF files from my scanner, and once that is done, the files are synced to my SharePoint Online. So far so good. But the problem is that the content of these files is not searchable. Now let's fix that.

Fix to make scanned PDF files searchable

We use Microsoft Flow to convert a scanned PDF into a searchable PDF file, and within Flow there are many ways to do this. I initially tried the combination of Computer Vision AI and some other services, as shown above.

Computer Vision AI in SharePoint

But I was not getting the expected output, so I decided to go with other options. If you are new to OCR technology or Computer Vision AI, you can find my article here.

Create a flow

The files are being synced to my Documents folder in SharePoint, so I needed to create a flow that gets triggered whenever a file is uploaded. Click on "Create a flow" and you will be asked to select a flow template; I selected the template "When a new file is added in SharePoint, complete a custom action". Once you click the Continue button, you are ready to create new steps in your flow.

Add steps in Flow

A flow is a step-by-step solution; some steps have output that we can carry to the next step, and in our flow we use this a lot. Once you connect to the SharePoint site, we need to get the uploaded file's properties. To do that, click on the +(plus) icon, select "Add an action", and then search for "Get File Properties". Now select the site address and the library, then click on the ID field; you will see an option to select the output of the previous step.
The ID of the file created

Now we have the file and need to check the file type. To do that, add a condition control and then add the conditions to it.

Condition to check whether PDF or image

Each condition has an output of "Yes" or "No". In the "Yes" branch we will add all of our other steps; we will not worry about the "No" branch for now, though you could add some tasks there. In the "Yes" branch, we can take the file and pass it to the OCR process, which is where a tool called AquaForest comes into the story. Please follow the steps mentioned in this article to get the key you need. Once that is done, add the action "OCR PDF or Images" by searching for "AquaForest".

AquaForest OCR PDF or Images

Give the connection a name and add the key in the next popup. There are many properties you can set here, but the two below are important.

File Content with OCR

As the output of this step, we get the OCRed file, and now all we have to do is add the "Create File" action and set it up.

Save the OCRed File

Wow, now we have a searchable PDF in our Documents folder. Go search for any content of your newly uploaded PDF. If you wish, you can also create an action to send an acknowledgment mail.

Testing the flow

As we have already created the flow, now it is time to test it. To do that, I added a scanned document to my OneDrive folder. We can check the flow's running status in the portal; below is a sample run-history output of my flow.

Run History of Flow

Conclusion

Thanks a lot for staying with me for a long time and reading this article.
I hope you have now learned about:

creating a flow in SharePoint Online
creating the steps in a flow
using the connections in Flow
OCRing a PDF using Computer Vision
OCRing a PDF using the AquaForest API
creating a new file with the OCRed output
sending mails from Flow

If you have learned anything else from this article, please let me know in the comment section.

Follow me

If you like this article, consider following me, haha! Your turn. What do you think?

Thanks a lot for reading. Did I miss anything that you think is needed in this article? Did you find this post useful? Kindly do not forget to share your feedback.

Kindest regards,
Sibeesh Venu
https://medium.com/medialesson/search-contents-of-a-pdf-file-in-sharepoint-online-make-them-searchable-using-microsoft-flow-b59254c2d4fb
['Sibeesh Venu']
2020-03-14 08:18:23.437000+00:00
['Flow', 'Sharepoint Online', 'Ocr', 'Computer Vision', 'Sharepoint']
How to team: lessons in mountaineering and software engineering
I’m a mountaineer and software QA engineer, and would consider myself ‘intermediate’ as a technical practitioner in both disciplines. I started climbing and joined my first tech team 7 years ago, at 27. Never having worn a backpack with a hip belt nor written a single line of HTML, I embarked on two journeys that had enough in common that I excelled more in both together than if I had tried each on its own. Many lessons from mountaineering applied to technology, and vice versa. The following are comparisons between stages of a successful mountain climb and a software release cycle.

Planning: Trip planning & Sprint planning
Execution: The Climb & The Sprint
You did it!: The Summit & The Commit
Descent: Rappelling & Regression
Expecting the unexpected
Lessons learned: Debrief & Retrospective
Do it again, better

As they say, “off you go!”

Scouting Three Fingered Jack, OR

Planning: Trip planning & Sprint planning

For a successful climb, planning is paramount. At a high level you want to know:

Team size
Team skill set
Terrain
Conditions
Weather
Gear needed
Route beta

Team size and skill set are often based around terrain, conditions, and gear. The team should have the skills required to complete the climb and be able to pick out and use the right tools for the job. In tech, we have similar things to consider when planning:

Team size
Team skill set
The software
Software dependencies
Management’s feelings about the project (e.g. support or skepticism)
Libraries and technology needed
Discussions about upcoming challenges

As a newer engineer, one of the most useful practices is learning from more experienced technologists. This is the mountaineering equivalent of discussing ‘route beta’ with someone who has climbed what you plan to climb. Participating in informational interviews (a.k.a. getting coffee) and domain-specific meetups can be a useful strategy to find out how others respond to challenges and employ methods and strategies.
It’s not at all cheating to get route beta from other climbers, or from other technologists. Hearing about other people’s experiences makes your challenges easier to plan, and makes them seem much more possible to take on.

Execution: The Climb & The Sprint

Both a climb and a sprint have time constraints. In climbing, you have a mountain to climb. In a software sprint, you deliver software to customers. The climbing concerns of days, daylight, stamina, and technical skill are akin to the concerns of having 10 workdays, stamina, and the technical skill of your software team. Teams in both areas identify members who either a) already have the skills to perform necessary tasks or b) have been allotted the time to develop them. Ideally, you’ve anticipated rough spots up front, be they a 5.8 trad route 16 miles into your trip or a gnarly refactor you found out needs to happen halfway into the sprint. Here are four key activities for both climbs and software sprints:

1. Know where you are

Route-finding is a serious mountaineering challenge. This, arguably, can be the most difficult and overlooked skill in the sport. I’ve learned to know where I am at all times. At every junction, look at the map. Ask the questions:

What cardinal direction am I headed?
What is the distance to the next junction?
With the terrain, and my team’s projected stamina, how long will it take to get there?
What landmarks are coming up?

In the sprint, tracking progress is important to help you understand whether or not you will meet self- or company-imposed deadlines.

Am I making technical choices that will make future choices easier?
How long do I expect to work on this task?
With the technology we’re working with, and my teammates’ current speed and endurance, how long will upcoming tasks take?
What do I expect to see when I finish the task (e.g. a working prototype, passing tests)?

2. “Do the best you can, where you are, with what you have”

This is a saying in my mountaineering club when it comes to wilderness first aid.
First aid is not unlike fixing issues as they come up in a sprint. An unexpected issue arises, and things take longer than you might think. There’s a whole section on ‘Expecting the unexpected’ below that expands on this point. But I say this here because: your team is your team. You’re on the trip together. Make the best of it, work with the utmost integrity, and don’t kill yourself, or each other, in the process.

3. Anticipate where you fit into upcoming challenges, but adapt as necessary

On a team, each person has their own strengths. Many of our skills overlap, but in any given moment, you might want to plan who is going to lead, and who will assist.

We knew the Bowling Alley on North Sister, OR would be tricky, but… yeah. Help needed in this technical section! :0

Sometimes the leader or the assistant will be busy, or unavailable, when the anticipated challenge comes up. That’s when it’s time for you to step in, exercise your own skills, and trust you’re ready. Most recently, on Mt. Washington in Oregon, I was neither a designated leader nor an assistant, but was welcomed as a skilled team member. We were on the descent, where the folks ahead of me were setting up the final rappel off the summit block. The wind ripped through the saddle. My teammates before me tried throwing the rope three times; each time it caught on the ragged, unforgiving red and rough volcanic rock below. When I reached them, even though I wasn’t ‘supposed’ to manage this, I knew how to check the anchors, re-set the carabiners, coil the rope in saddle bags, and rappel down, feeding the rope through my harness so the wind wouldn’t be a factor. So I did it, checking and rechecking my work with my teammates for safety. It wasn’t against the rules to get the job done. And while the team could have descended without me, I made it faster, and more pleasant.
In a similar vein, I recently helped with a project to ship a microservice to a customer who downloads our software, though we haven’t made that microservice available to all customers who download our software. Working with senior engineers, myself an associate, it turned out I was the first to have virtual machines ready to work on this proof of concept. While I wasn’t the leader or the assistant, just a member of the team, I helped create the deliverable and prove that it worked. In the end, I wrote the documentation and helped lead QA and customer-facing teams to understand how the system worked so they could interface with the customer. While neither the tasks nor the roles were initially assigned to me, and the job could have been done without me, I made it go faster, and… yes, more pleasant.

4. Communicate

You can’t tell, but we’re talking through which route has stable snow ON this snow bridge, Mt. Olympus, WA

On the mountain, you need to communicate your flight plan: the plan between climbers for what they’ll say to each other at each pitch. Talking through the route and its dangers. Speaking up if you have a blister or are hungry. In software, discussing your current progress, even if you think what you’re saying is redundant or a given, is invaluable to your team and management. It also gives you practice at communicating your choices, and gives others the opportunity to provide feedback at each step.

You did it!: The Summit & The Commit

These moments are what you’ve been working so hard for. You did it. You did it! You climbed a mountain! You merged your code into the engineering team’s common repository (e.g. master, develop)!

Summit of North Sister, OR. What. a. climb. y’all.

You can take a breath, enjoy your hard work, and feel on top of the world. You earned it. Have a snack! Go to the bathroom! Peeing on a mountain or attaching my name to a commit gives me the same satisfaction.
As long as the glaciers don’t melt or the software doesn’t get thrown out, your signature, biological or virtual, is forever!

Descent: Rappelling & Regression

They say climbers are the only sportspeople who celebrate their success when they’re only halfway done. I’d argue tech people are pretty similar in that regard. You’re still on a mountain, and your code hasn’t yet shipped to customers. It’s going down.

Happy fun rappelling times after work on Rooster Rock! Columbia River Gorge, OR

Rappelling

BARK is an acronym for Belt, Anchor, Rappel device, Knot. Use BARK to check critical pieces of equipment for failure points when you’re ready to rappel. By ‘ready’ I mean you’re doing it; you do not live on the mountain. If the belt of your climbing harness is not buckled, it will fall off under load. If your anchor is not secured, it will buckle under load. If your rappel device does not have the rope going through it, you won’t be anchored to the rope. The knot at the end of the rope will prevent you from rappelling off the end of that rope. ANY TIME YOU MAKE A CHANGE TO YOUR EQUIPMENT, YOU HAVE TO DO THE CHECK AGAIN. If someone distracts you while you’re doing your safety check? Do it again. Did you go to the bathroom on your way up, taking your harness off? Check your belt. Do you trust the person who set up the anchor, but are they hungry and aloof? The checklist is a fail-safe against the absolute worst-case scenario, even if you and your team are seasoned climbers.

Critical Product Scenarios

Critical Product Scenarios are important to test when you change anything about your product. A CPS represents the touch points in the steps a user takes to accomplish something with your software. Let’s say the user wants to buy something. ASAP is an acronym for Authenticate, Select, Add-to-cart, Purchase. Hey, that’s clever! Did I make that up?
Anyway, with every release of your software, you need to regression test, i.e. check your critical product scenarios for potential breakage. If the user can’t log in, they can’t save their cart. If they can’t click a link to the product, they can’t see it. If they can’t add it to their cart, they can’t buy it. If they can’t pay you, you don’t get paid. ANY TIME YOU MAKE SIGNIFICANT CHANGES TO YOUR SOFTWARE, YOU HAVE TO CHECK CRITICAL PRODUCT SCENARIOS. The reason I use all-caps is that this takes time, but you just can’t mess it up. Sorry-not-sorry if this task takes longer than your team wants, or your managers, or your product owners. Your users need you to make sure these user flows work every time. If they don’t work, actual people doing actual work with your software are screwed.

Expecting the unexpected

The difference between the people I want to climb and develop with, and the people I don’t, is their approach to the unexpected. It’s very easy to be irritated or angry or afraid, or to fail to be in control, when things feel out of control. I get it! I can be that way too. Rock in crampons?! Not my favorite.

Scary danger on Lane Peak, the Zipper, WA

Being on the mountain, when something unexpected happens, you have to “do the best you can, with what you have, where you are.” I said it before, and it bears repeating. You need to ensure your own safety, and the safety of your team, first. In software, you try your best to understand the consequences of the code you’re writing, but sometimes it interacts with services, libraries, and users in ways you didn’t expect. You have to take care of your software, your users, and the emotional safety of your team members when your plans are interrupted. If you’re approaching something you don’t understand, go back, or take it slow if you have the luxury of time. You can always return to base camp, or pause your work, do more research, and try again.
As a responsible team member, please consider the following when things get tough:

If someone makes a mistake, help them
If someone doesn’t know what to do next, help them
If someone doesn’t understand core principles you thought they did, help them
You are going to need help some day
Everyone does
If you need help, you have to ask for it

Lessons learned: Debrief & Retrospective

‘To learn’ is to know something now you didn’t know before. Or to do something that didn’t work, understand why, and do something different next time. After both climbs and software sprints, the team does a debrief, a.k.a. retrospective, in order to learn how to improve.

What went well?

Camaraderie
Good attitude
Seamless transitions

Just before our ‘Roses, Thorns, Buds’ debrief after Mt. Olympus

What went wrong beyond our control, and how did we respond?

The weather turned at 10,000 feet, with a lenticular cloud growing as the wind picked up. We turned back, even though we had all traveled 10 hours to get there. Proud of us for making the right decision.
The frontend refactor broke the styling of another part of the application. The logic was so convoluted we couldn’t justify the time it would take to make the styling consistent and functional across the system. We had to stop. It was a hard decision, but we had to move on. We documented the issue for a future team to manage a frontend overhaul.

What did we learn?

The approach wiped us out more than we thought it would, so it took 10 hours instead of 7.
By trying to move quickly, we made technical decisions we had to back out of because we could not make them work in time.

How can we do better?

We will train more in advance and bring much more food.
To avoid making faulty technical decisions, we’ll justify doing more technical dives, so we know what we’re getting ourselves into before we jump in.

Do it again, better

You’re going to climb more mountains. You’re going to release more software.
You’re going to be a better climber, and you’re going to be a better engineer. If you keep doing it with people who can teach you and whom you can teach, and stay open to learning, you’ll get better. And while I believe everything I wrote, understand that I will always be challenged, and sometimes we come up short. But we’re learning. That’s how succeeding in mountaineering and tech works. I’ve climbed lots of mountains, but yo, Mt. Olympus is my favorite!!! Thanks for reading! Remember, leave no trace out there! Pick up your garbage, and please delete old unused branches. :D
https://jensen-sara-e.medium.com/how-to-team-lessons-in-mountaineering-and-software-engineering-d9a1ec040ae0
['Sara Jensen']
2020-12-05 00:33:05.437000+00:00
['Climbing', 'Software Testing', 'Software Development', 'Mountaineering', 'Software Engineering']
Detect Faces With C# And Dlib In Only 40 Lines Of Code
A hot research area in computer vision is building software that understands the human face. The most obvious application is face recognition, but we can also do lots of other cool stuff like head pose estimation, emotion detection, eye gaze detection, and blink detection. A cool example of facial analysis in real life is General Motors’ hands-free driving system, Super Cruise. When the car is in self-driving mode, it wants the driver to have their attention on the road at all times, so when the driver looks away or appears to fall asleep, an alarm sounds right away. How does the car pull this off? There’s a little camera on the steering column pointed at the driver, and the car’s software performs face analysis in real time.

Building face analysis apps is surprisingly easy. There are amazing computer vision libraries available that make building computer vision apps a breeze. In this article I’ll use Dlib, the go-to library for face detection. It’s intended for C++ projects, but Takuya Takeuchi has created a NuGet package called DlibDotNet that exposes the complete Dlib API to C#. I am going to build an app that can detect all the faces visible in an image. I’ll use C#, Dlib, DlibDotNet, and .NET Core 3, and try to achieve my goal with the minimum of code. .NET Core is Microsoft’s multi-platform .NET framework that runs on Windows, OS X, and Linux; it’s the future of cross-platform .NET development.

I’ll use the following image to test my app:

This is the famous selfie Ellen DeGeneres took at the Oscars in 2014. It’s a great test image because everybody is looking at the camera and we have a couple of celebrities with their faces only partly visible. Save this image as ‘input.jpg’. Let’s get started. Here’s how to set up a new console project in .NET Core:

$ dotnet new console -o DetectFaces
$ cd DetectFaces

Next, I need to install the NuGet package I need:

$ dotnet add package DlibDotNet

That was easy!
This single NuGet package installs Dlib and the DlibDotNet wrapper, and sets everything up for your operating system. If you're working on a Mac like me, you'll have to perform one extra step: DlibDotNet requires the XQuartz library, which isn't installed by default on a clean macOS system. You can easily install XQuartz with Homebrew:

$ brew cask install xquartz

And that's it. Now I'm ready to add some code to Program.cs. The first step is to load a face detector: the Dlib.GetFrontalFaceDetector method returns a detector that's optimized for frontal faces, i.e. people looking straight at the camera. Next, I have to load the image and perform face detection. The Dlib.LoadImage<RgbPixel> method loads the image into memory with interleaved color channels, and the detector's Operator method then performs face detection on the image. The faces variable now holds an array of Rectangle structs; each rectangle describes where the face detector found a face in the image. To highlight the detection results, I call Dlib.DrawRectangle to draw a rectangle on the image at the location of each face. The final step, at the end of the Main method, uses the Dlib.SaveJpeg method to save the modified image as output.jpg. And that's it! You can run this app on Linux, macOS, or Windows with Visual Studio Code.
https://medium.com/machinelearningadvantage/detect-faces-with-c-and-dlib-in-only-40-lines-of-code-f0bb1b929133
['Mark Farragher']
2019-06-07 14:31:18.547000+00:00
['Programming', 'Dotnet', 'Dotnet Core', 'Facedetection', 'Computer Vision']
Top 5 Facebook Challenges I Want To See & Take Part In
№5 — Five Of The Most Absurd & Dangerous Places You've Shagged In: One location per day for five consecutive days. Lots of explanation, with as many diagrams as you feel necessary to set the scene. All the juicy details, including a photo of your absurd and dangerous partner, and what particular positions you may have gotten yourself into. The inspiration for choosing that specific venue. And a note at the end detailing any complications to shaggin' in that particular spot, e.g. bum rash, cooties, being chased by a knife-wielding husband or wife, etc.

№4 — Five Of The Strangest Locations You've Relieved Yourself At: One destination per day for five consecutive days. We've all found ourselves in that "I have to go, now!" situation. Maybe it's a long car drive. Perhaps you had a few too many espressos before your date, and suddenly your pea-shaped prostate starts protesting just as you're about to board that ferry to that delightful little island that you know for a fact has no washrooms. Whatever the reason, explain why you had to go where you went. Was it a number one or a number two situation? Did you manage to relieve yourself without anyone seeing or catching you? And have you become addicted to the thrill of relieving yourself in locations other than the pre-approved ones?

№3 — Five Of The Biggest Lies You've Told To Get Out Of Work: We've all done it, especially if you've ever had any sort of job at a shopping mall. And if you say you haven't, I won't believe you. You're either lying, or you're just about to tell a whopper. One lie per day for five consecutive days. The best ones involve dead relatives with rare and exotic ailments that sound Hungarian in origin. All the reasons why you told the lie. How crap your job was. How many times you used the lie or variations of it. Whether you ever got caught telling it. And most importantly, whether you ever asked someone else to tell the lie for you.
№2 — Five Of The Most Inappropriate Locations You've Passed Gas At: If you're human, you've done this, and chances are it's been somewhere you shouldn't have. One poorly timed fart at one embarrassing location per day for five days. Everyone has thought, "I'll just slip this one out. I'm all alone." And then, of course, three people step into the elevator on the next floor just as your bouquet ripens. But that's normal. That's bloody boring, ole bean. For the purposes of this challenge, we want genuinely inappropriate. In bed, while making love for the first time, would be a fine example. Why you couldn't hold it. How long your gas expulsion lasted. The level of ripeness on a scale of 1–10, with 1 being a slice of Danish cheese and 10 being chemical weapon-grade.

№1 — Five Ex-Boyfriends/Girlfriends You Wished Would Get Bad Acne: Yes, yes, I know we're all in this together and all that other mother jazz, but really. There are people out there who've ground your poor beating heart down into a fine mist, and they deserve their acne. These are the ones we want to read about. One ex per day for five consecutive days. Tell us why you want your ex to break out like the surface of the moon. What did they do to deserve your pimply wish? The more detail, the better. Think of it as a cathartic revenge exercise, but with everyone on social media sharing in your therapy.
https://medium.com/the-haven/top-5-facebook-challenges-i-want-to-see-take-part-in-97973ac05158
['Mr. Francis']
2020-09-17 20:09:47.101000+00:00
['Challenge', 'Facebook', 'Humor', 'Social Media', 'Covid 19']
Standing against the law for justice
Over the past few years I’ve met and encouraged many people facing their first ever arrest. I’m not a drug dealer or a member of a pickpocket gang. The people choosing to break the law and possibly ruin their clean records have included students, professionals and retirees, but the one thing they have in common is their concern for the future of humanity and life on this planet. They are young people wondering how they will survive in a changed climate and elders fearful for their grandchildren. We are the growing number of generally law-abiding citizens who realise that if we allow our governments and corporations to continue to operate in the way that they are, we’re all screwed. Our path to destruction needs to be diverted pretty much immediately if the current generation is to have any hope at all of enjoying a peaceful old age. Voting or signing petitions will not be enough to achieve the rapid change of direction that is necessary. The only way to stop our leaders dragging us to our annihilation is to put our bodies on the line and resist. Each time I see someone take the initial decision to deliberately risk arrest for the sake of our collective existence, I’m reminded of my first time. When we good girls and boys overcome our deeply ingrained instinct to do whatever a police officer says, it is always a moment of triumph. It’s one of the most empowering choices an ordinary human can make. The scales fall from our eyes and we understand that the future belongs to all of us. The idiots can only control us if we let them. I think I’d been waiting for that moment all my life, although I didn’t know it before then. It was about twenty years ago, when a company was trying to build a shale oil plant in my state of Queensland. It would have been the most carbon intensive way of producing electricity possible and would have needed huge subsidies from the state and federal governments to be even remotely viable. In summary, it was totally insane. 
So nearly fifty of us descended on the plant, locking on to all the entrances and climbing the conveyor belt to close down operations. I was attached to a gate with a bicycle D-lock around my neck, blocking access near the top of the conveyor, while others suspended themselves from the machinery so it could not be used. We had planned to stay there for up to five days, but by the end of the first day specially trained police had arrived from Brisbane and had broken their way through those on the perimeter gates. They tried to persuade me to unlock myself but since I refused they were forced to cut the lock with an angle grinder. It might seem counterintuitive, but as the sparks flew around my neck and face I think I felt more calm and in control than I ever had before. Events were unfolding exactly as I had decided they should. After a couple more years of similar actions and mounting public pressure we won the campaign and the company gave up their plans. A different company is now trying to resurrect the crazy scheme, so we may have to begin the fight again. We never have to accept our elected leaders spending our money on unjustified wars or fossil fuel developments which will condemn our planet to endless natural disasters and an unliveable climate. The people do have the power. Ultimately, they have to listen to us and when they see we value life over our own short-term comfort and security, they will know we have nothing to lose. When the police tell someone that they will be arrested unless they move on, unlock themselves from the conveyor belt or come down from the tripod, they are accustomed to people obeying. When you tell them that you understand but that the campaign is more important than your personal freedom, you are the one in charge. When you show them that you are not afraid, and that you actually desire the issue to be taken to court because you know you are on the right side of justice, you become the one in control of the situation. 
Those in power are scared. In many countries, people are rising up to protect their land and water, and authorities are responding with draconian laws designed to intimidate us into submission. In Australia they are trying to limit our freedom of association and punishing people for exposing our government's shameful secrets. In the USA, police are shooting and killing innocent people. Their tactics are not going to work, because the stakes are too high. When we peacefully resist efforts to destroy our natural environment, to divide us with hatred and to silence our protest, we will win in the end because we are right. When we stand together, without fear, in the defence of life, we must prevail. The first time you stand against the law for justice will probably not be the last.
https://emmabriggs.medium.com/the-first-time-standing-against-the-law-ed751cefe2df
['Emma Briggs']
2019-04-07 20:58:31.795000+00:00
['Justice', 'Future', 'Law', 'Climate Change', 'Activism']
Using SoloLearn While Studying Data Science at Harvard
By any number of metrics and rankings, the data science program at Harvard University has long been considered the premier option for individuals looking to pursue careers in the field. As with any program at a prestigious Ivy League school, Harvard offers a combination of top-of-their-field instructors, outstanding networking potential, and practical, applied coursework that churns out skilled data science professionals year after year. In any demanding academic program, what happens inside the classroom is essential, but it is not the only opportunity to maximize your education during your time in school. Data science is a tough field — that's why there's so much demand for qualified data science professionals. As a result, data science programs are often considered among the most demanding of any postgraduate field of study. This is where SoloLearn comes in: with a suite of tools designed to give students additional practice opportunities and feedback, and to answer questions that may not be covered during traditional studies, the SoloLearn app is an ideal way to "upgrade" your time studying data science at Harvard, offering an invaluable resource right in your pocket to help you navigate the toughest aspects of your coursework. In this guide, we will first break down what to expect when entering the data science program at Harvard. Then, we will walk you through the many tools SoloLearn offers to support your studies.

What To Expect From Harvard's Data Science Program

In the words of the university, Harvard's data science program "provides an opportunity for students to gain advanced quantitative methods skills while learning how to apply them to the most interesting and immediate social science questions".
In other words, Harvard's program is lauded for offering a combination of technical knowledge and proof of application — giving students the chance to apply what they learn in data science to actual questions and problems in the field. The program is centered around a "Foundations of Data Science" requirement, which requires prospective students to couple their instruction with a series of dedicated methods courses (with some flexibility among specific courses to choose from) designed specifically for "students who have a strong interest in the social sciences but also want to gain the increasingly necessary skills of data analysis".

There are several tangible and intangible benefits to this approach to teaching data science:

- Many novice data scientists struggle with developing the theoretical questions that allow them to apply their learning. Harvard's focus on applications gives students the opportunity for hands-on experience with this practice, before ever entering the workforce.
- The applied/methods approach also allows Harvard to draw on its deep and talented alumni base, many of whom hold senior positions at the biggest names in tech, including Apple, Google, Facebook, and others.
- This integration with actual corporations and current experienced data scientists also offers students invaluable networking opportunities. The data science program allows students to cultivate job leads and internships well before their studies are completed.

But what courses would you take?
While the data science program's website can offer an exhaustive list, a few examples include:

Foundations of Data Science course list:
- Stat 139 (Linear Models)
- CS 109a/Stat 121a (Data Science 1: Intro to Data Science)

Advanced Methods course list:
- Gov 1005 (Data)
- Gov 1006 (Models)
- Gov 2001 (Advanced Quantitative Research Methodology)
- Gov 2002/Stat 186 (Causal Inference)
- Gov 2003 (Topics in Quantitative Methodology)

So now that you know what you can expect from the program, let's explore how SoloLearn fits in.
https://medium.com/sololearn/using-sololearn-while-studying-data-science-at-harvard-704cb85a2373
['Anais Gyulbudaghyan']
2020-09-15 17:35:53.662000+00:00
['Coding', 'Harvard', 'Data Science', 'Python', 'Sololearn']
#ClimateWednesday Launch Tweet-Chat Series On #ClimateFinanceNG
Climate finance is critical to addressing climate change because large-scale investments are required to significantly reduce emissions, notably in sectors that emit large quantities of greenhouse gases. Climate finance is equally important for adaptation, for which significant financial resources will similarly be required to allow societies and economies to adapt to the adverse effects and reduce the impacts of climate change. It is important for all governments and stakeholders to understand and assess the financial needs of developing countries, as well as to understand how these financial resources can be mobilized. Provision of resources should also aim to achieve a balance between adaptation and mitigation. Overall, efforts under the Paris Agreement are guided by its aim of making finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development. Assessing progress in the provision and mobilization of support is also part of the global stocktake under the Agreement. The Paris Agreement also places emphasis on the transparency and enhanced predictability of financial support.

The International Climate Change Development Initiative will be holding a tweet-chat series on #ClimateFinanceNG — this will help stakeholders understand more about:

1. Attracting Funding For Climate Adaptation and Mitigation
2. Bridging the Investment Gap for Nationally Determined Contributions (NDCs)
3. Examining Nigeria's Readiness for Sustainable Transition
4. Assessing Nigeria's Potential for Bankable Renewable Energy Transactions
5. Mainstreaming the Transition to Solar Power and Zero Carbon Emissions

The #ClimateWednesday tweet-chat series on #ClimateFinanceNG starts this Wednesday. Stay connected.
https://medium.com/climatewed/climatewednesday-launch-tweet-chat-series-on-climatefinanceng-5d809521380f
['Iccdi Africa']
2020-09-15 15:24:13.436000+00:00
['Climate Wednesday', 'Climate Finance', 'Wash', 'Climate Change', 'Renewable Energy']
Change Your Facebook Profile to Support Black America
Change Your Facebook Profile to Support Black America

Black Lives Matter. Be an Ally. Click here to create your picture.

This week we watched two black men be murdered by police, but no prayers came. There will be 305 more in the next 365 days. Time to demand change. It is clear who matters to the leadership at Facebook. It is clear who matters to the Justice Department of the United States. It is clear who matters to the law enforcers that our tax dollars go to. Those people do not include those who look like me.

Justice will not be served until those who are unaffected are as outraged as those who are. — Benjamin Franklin

During the ISIS attacks on Paris, Facebook was quick to create a feature for users to change their profile pictures to don the flag of France. Not too long after that, the world was made aware of thousands of people killed by terrorists in northwest Africa, but the crickets were louder. Facebook made it clear who mattered to them. Since Facebook doesn't care enough about Black America to pray for us in our time of darkness, we must pray for ourselves; we must support ourselves; we must stand as one against the injustices that plague our communities. Change your Facebook profile picture to show your support for Black Lives in America. Black Lives Matter. Be an Ally.

I'm not Dr. Martin Luther King. I don't have the platform to rally millions behind me. However, I am a writer. My passion is in developing software to enable people. Since I am just one person, I took the time tonight to create a quick webapp to enable anybody who cares about the freedom of people of color (POC) to make it clear to their community that they support justice for all people by changing their Facebook profile picture. You don't have to be black to be outraged. Remember that you don't have to be black to have empathy for your fellow citizens.
If you don't know how to show your support for POC communities, feel free to change your profile picture in solidarity with us. Change your Facebook profile picture to show your support for Black Lives in America. Black Lives Matter. Be an Ally. Click here to create your picture. It takes 20 seconds.
https://medium.com/thsppl/facebook-prayed-for-paris-lets-pray-for-black-america-bc4a4cf55df5
['Lincoln W Daniel']
2017-07-07 20:59:09.874000+00:00
['BlackLivesMatter', 'Facebook', 'Politics And Protest', 'Racism']
How Men Can Support Women’s Rights by Being There for the Little Things
Often the biggest impact starts on a small scale. Sometimes that means one person raising their voice, sometimes it’s one person saying no, and sometimes it’s someone making an effort to change the game. Whether it’s coming from a mother or father or both, gender equality can start early — by teaching girls from a young age that they are strong, smart, and enough. That girls are just as capable as boys, and just as much is expected of them. And that their voice and opinions are just as important. The truth is, you don’t have to be loud to make a difference. And women’s rights isn’t just a women’s issue — men can have a part to play too. It could be small steps like a father teaching his daughter to be assertive and stand up for herself, or teaching her skills that typically only boys were expected to have in the past. It can mean encouraging more girls to get outside, work on cars, study science, math, and tech. Or it can mean allowing boys to talk about emotion and show that they care, rather than continuing the unfortunate idea that “real men don’t cry”. And ultimately, it’s teaching both boys and girls about respect.
https://medium.com/age-of-awareness/how-men-can-support-womens-rights-by-being-there-for-the-little-things-dc3c85e57125
['Samantha Blake']
2020-12-02 03:38:06.628000+00:00
['Education', 'Family', 'Society', 'Feminism', 'Relationships']
The 3 Virtues of Good Programmers
Virtue #1: Laziness It’s telling that computer science is the only field where “lazy” is a technical term. Programmers have a lot of work to get done. Logically, the best way to address a large set of work tasks is to get rid of any of them that you don’t really need to do. Next, try to get rid of any repetitive tasks. If there’s one thing we hate, it’s routine. We don’t want to repeat the same keystrokes if it’s possible to write a script to do it for us. Why? Because we’re lazy — and that’s a good thing. Laziness is a result of our desire for efficiency. In some cases, this can be a bit of a cultural miscommunication. Sometimes programmers are judged for how much code we produce. For managers from non-technical backgrounds, a programmer who works really hard to produce an enormous amount of code might be seen as the hardest worker in the office. But a clever programmer knows that producing a lot of code might just mean you’re being inefficient. It’s much better to come up with a clever, low-effort solution, rather than reinventing the wheel or over-engineering something. Why spend hours writing code you don’t need?
https://medium.com/better-programming/here-are-the-three-virtues-of-good-programmers-e561e061ea19
['Robert Quinlivan']
2019-12-13 17:57:17.299000+00:00
['Career', 'Programming', 'Software Development', 'Startup']
A New Future for Java
The "Kotlinisation" of Java

Some of the upcoming Java features are going to be a massive improvement in terms of readability, addressing one of Java's main weaknesses: its verbosity. They all bear a suspicious similarity to certain Kotlin features. Please keep in mind that most of these features are feature previews. That means that if you install JDK 14, or JDK 15 when it gets released, you won't be able to use them by default. Java feature previews are new features that ship in a release but are disabled by default. They're included in the release just to gather feedback from the developer community, so they're still subject to change. That's why it's not recommended to use them in production code. To enable them at compilation time, you'd run the following:

javac --enable-preview --release 14 YourClass.java

If you want to enable them at runtime, you'd run the following:

java --enable-preview YourClass

Of course, you can also enable them in your IDE, but be careful not to enable previews by default in all your new projects! Let's take a look at the changes that are going to have the biggest impact on our coding in future versions of Java.

Java records

Java records are a feature that many of us have been demanding for a long time. I guess you've been in the same situation multiple times: reluctantly implementing toString, hashCode, and equals, along with getters for each existing field. (I'm assuming that you're no longer using setters, and you definitely should not be.) Kotlin provides data classes to solve this problem, and Java intends to do the same by releasing record classes, something Scala also has with its case classes. The main purpose of these classes is to hold immutable data in an object. Let's take a look at how much better it will be in Java by looking at an example.
Consider how much code we'd have to write today to be able to instantiate and compare an Employee class, plus the Address object it contains: a constructor, getters, equals, hashCode, and toString for each. That's a lot of code for something so simple, isn't it? With the new Java records, the Employee and Address classes express exactly the same thing in a fraction of the code. You'll have to agree with me on this: it's amazing how much code we're going to save and how much simpler it is! Let's see now what the differences are in the new switch statement.

Improved switch statement

The new switch expressions in Java solve some of the inherent problems of using switch statements. Today's switch statements should generally be avoided because they're very error-prone and lead to code duplication; currently, it's very easy to leave a case uncovered, for example. With the new switch expressions, that problem is solved, because the code won't compile if our switch doesn't cover the whole domain of the type we pass in. To explain this with an example, imagine a DayOfTheWeek enum in Java, and a switch that tells us what position in the week corresponds to each day. Using the current switch statement (up to Java 11), we'd have to assign the result through a mutable variable, and if we missed one of the days of the week, our code would still compile perfectly. This is one of the problems of switch statements: they're very error-prone. So how does Java 14 improve the situation? The new switch can be used as an expression, not only as a statement, and the result is more concise and expressive. That alone would be enough to convince many of us to use it, but one of the main improvements is that switch expressions won't compile if we don't cover all the cases.
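The original code listings did not survive extraction, so here is a minimal sketch of the exhaustive switch expression described above (hypothetical DayOfTheWeek example; switch expressions are standard from Java 14 onward):

```java
public class SwitchDemo {

    enum DayOfTheWeek { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    // A switch *expression*: it yields a value directly, with no mutable
    // helper variable, and it must cover every enum constant. Deleting
    // any one of these cases turns this into a compile-time error.
    static int positionInWeek(DayOfTheWeek day) {
        return switch (day) {
            case MONDAY -> 1;
            case TUESDAY -> 2;
            case WEDNESDAY -> 3;
            case THURSDAY -> 4;
            case FRIDAY -> 5;
            case SATURDAY -> 6;
            case SUNDAY -> 7;
        };
    }

    public static void main(String[] args) {
        System.out.println(positionInWeek(DayOfTheWeek.WEDNESDAY)); // prints 3
    }
}
```

Note that the arrow cases need no break statements, so the fall-through bugs of classic switch statements disappear as well.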
If we miss a case, the compiler will show us an error like the following:

Error:(9, 24) java: the switch expression does not cover all possible input values

From now on, it'll be impossible to miss a case in our switch expressions. That's awesome, isn't it? This is very similar to Kotlin's when expressions, which you can read about in the Kotlin documentation. Let's also take a look at the new text blocks.

Text blocks

Have you ever moaned about how ugly and difficult it was to assign a big blob of JSON to a variable in Java? Java will introduce multi-line strings that you define by wrapping them between triple quotes. When this feature gets officially released, it'll be much easier to define long strings across multiple lines. At the moment, if we want to store formatted JSON in a variable, we have to concatenate quoted, escaped fragments line by line, which looks ugly; with text blocks, the same literal is as easy and clean as writing the JSON verbatim between triple quotes. I think that's much better, don't you agree? This is also something Kotlin supports with its raw strings.

So we've seen that Java is "inheriting" many of the solutions to its own problems from one of its competitors: Kotlin. We don't know whether this time Oracle has reacted in time to combat the rise of Kotlin, or whether it comes too late. Personally, I think Java is taking the right steps forward, even if these changes were triggered in some way by its competitors and might come a bit late. As mentioned earlier, if this article has triggered your interest in learning the Kotlin language, I'd recommend reading "Kotlin in Action," a very good book for Java developers starting out in Kotlin.
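Putting records and text blocks together, a rough, self-contained sketch of the two features discussed above (hypothetical Employee/Address example; records were preview in JDK 14 and 15 and final from JDK 16, text blocks final from JDK 15):

```java
public class RecordsDemo {

    // Records: the compiler generates the constructor, accessors,
    // equals, hashCode, and toString; no boilerplate required.
    record Address(String street, String city) {}
    record Employee(String name, Address address) {}

    public static void main(String[] args) {
        Employee a = new Employee("Jane", new Address("1 Main St", "Springfield"));
        Employee b = new Employee("Jane", new Address("1 Main St", "Springfield"));

        // Value-based equality comes for free:
        System.out.println(a.equals(b)); // prints true
        System.out.println(a.name());    // generated accessor, no getName() needed

        // A text block: a multi-line string between triple quotes,
        // no escaped quotes or '+' concatenation needed.
        String json = """
                {
                  "name": "Jane",
                  "city": "Springfield"
                }
                """;
        System.out.println(json);
    }
}
```

Because the record components are final, a record is a natural fit for the immutable data carriers the article describes.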
https://medium.com/better-programming/a-new-future-for-java-b10a6789f962
['The Bored Dev']
2020-08-06 16:36:53.944000+00:00
['Software Development', 'Android', 'Mobile', 'Java', 'Programming']
5 companies leading the way in datafood — and why we believe radical change is needed in the food-system
Gone are the days when data science was strictly the domain of the data scientist, engineer or technologist. We've moved forward to the age of the data entrepreneur, and with that mindset, into collaboration with businesses at the intersection of innovation. Most days you'll find me here, the mediator and advisor between data scientists, analysts, business leaders and innovators. For me, it's the sweet spot — helping find solutions to real-world problems, and discovering opportunities for growth. It's also afforded me a wide (and growing) knowledge of various industries — from professional boxing to transport, energy and urban planning. But as you might know, I especially focus on the food and agriculture space, and I'm continually surprised at the level of creativity and determination towards bettering the industry that I see. Here are 5 companies leading the way in #datafood — here in the Netherlands, and beyond — and why.

Connecterra

I've known Connecterra for a very long time now, and they really are a progressive example of how combining AI and humans (mostly farmers) can help create a more humane and profitable food system. Since the beginning of JADS, we have been working with Sicco van Pier Gosliga (Head of R&D) in our Data Entrepreneurship in Action program. This collaboration turned more formal when Sicco became a JADS fellow this year. Through their AI technology, named Ida, Connecterra are helping dairy farms run 30% more efficiently. And they really aren't trying to replace the role of humans with technology; rather, they dub Ida a 'farmer's assistant', effectively helping and elevating the role of the farmer. "We innovate for a purpose. Our core competency is building AIs that will impact the future of our planet," says Connecterra. So how does it work? By using monitoring tools like sensor cubes that go around each cow's neck, and machine learning technology.
Ida monitors and learns the behaviour of farmers and dairy cows, sending the data to the cloud for processing and analysing, before providing guidance on how to function more effectively. Of course, these insights — patterns, trends and correlations — have a flow-on effect, including reducing the environmental impact of livestock farming and maintaining the livelihood of farmers. Not only are Connecterra saving time and dollars for the dairy farming industry, they're helping to boost animal welfare, which can lead to a better-tasting product. Their data management software monitors milk production, as well as how regularly a cow needs veterinary attention. Over time, with Ida and machine learning, insights aid cost-offsetting and animal care decisions. They explain: "In other industries, deploying artificial intelligence like this is 'predictive maintenance.' Take a factory's machines. Diagnostics identify when a bearing needs repacking, or a seal replacing, suggesting action before the part fails. By extending those same principles to cattle health — mastitis, for example — we are alerted before problems become serious. Prevention is better than cure."

Sicco van Pier Gosliga, R&D Director, Connecterra

Het Familie Varken

Similarly, a happy family of pigs living their best life together is what drives Het Familie Varken. Keeping it close to home and focused on ethical farming, they're currently building their first ethical stable in Boekel, North Brabant. "It started with the question: can we offer pigs a pleasant life in a pigsty? Because pigs also experience stress. And that is neither pleasant nor healthy!" A real collaboration, they're working with pig farmers, scientists, technologists, architects and builders, food suppliers and geneticists, to create an environment for pigs that is stress-free. The driving force is director Tjacko Sijpkens, one of the most inspirational leaders you'll meet, in my opinion.
No one has put it more succinctly than Tjacko: "The food industry has lost its mandate from society as a whole, to supply food. Full stop. The food industry needs radical change." How do they know their pigs are happy? They've done a considerable amount of research along with Inonge Reimert of Wageningen University, who has been studying pig behaviour for years. Techniques include gathering data by filming the pigs and taking photos of the family pigs' faces at regular intervals, because their facial expressions change when they are stressed, just as humans' faces do. Through their data entrepreneurship and a strong mandate to work closely with nature, Het Familie Varken are paving the way as an industry leader. "The natural behavior of the pig is central in our design. We believe that nature is our best friend and by itself a very intelligent system, knowing best about vitality and growing. Data will help capture this valuable knowledge," says Tjacko Sijpkens, CEO of Het Familie Varken. "The sort of data & sensors in the farm are all concentrated on animal behavior, conversion & growth, and all sorts of living conditions. Key to this data gathering will be the developed ear tag that measures 24/7 every movement of every pig. We are also keen on cross-linking the data with our consortium partners through the total value chain (genetics-food-farm-butcher-retail-consumer). The leader of the industry is the one who best knows about these interrelations and how to market them to the consumer." No doubt their pigs are living happy lives as nature intended. And when it comes to the consumer, much like Connecterra, Het Familie Varken recognise the positive impact a happy animal's life has on the quality and taste of the meat it produces.

Protix

Still in the farming realm, this one, frankly, could be a game changer. Protix is for the planet, and their focus is on working towards a circular food system — one that's in balance with nature (there's a real theme here).
And the driving force of the system? Insects. Touted by some as the food of the future, Protix is innovating farming as we know it, using data science and “verifiable and scalable insect breeding” to create a humane, sustainable food system for the future. Insects are unique. As Protix says, they’re “powerful upcyclers and the missing link in our food system.” They also “have the amazing ability to turn low-grade food waste into valuable high-end proteins and fats.” And they can do this fast, with a low impact on resources (one tonne of insects can be grown in six days using a land area of only 20 square metres). In comparison, traditional farming for sources of protein such as meat and soy uses up much larger amounts of water and land, and has a far greater impact on ecosystems. So, the benefits are many and diverse. A key player in Protix’s vision is the black soldier fly. The fly’s larvae provide a unique source of protein for food and feed. Not only can insects become part of a high-protein, healthy and fair diet for humans; when they die, they also become part of a circular food chain — sustainable (and still high-protein) feed for fish and chickens. In the case of the chicken, this also leads to better tasting eggs, and allows the chicken to return to more natural behaviour — again, bringing in the added element of animal welfare. Managing all of this takes dedicated partners, along with high-tech farming solutions, AI, genetic improvement programs and robotics. This technology helps Protix produce consistently, at the right quality and with reliable output — factors that are key to really changing the food system (giving a new meaning to strength in numbers). Who’d have thought insects (and smart data) could be the solution? Brilliant. Kishan Vasani, Co-Founder & CEO at Spoonshot Spoonshot Companies like Spoonshot are also in a league of their own — this time enhancing the customer experience through guest intelligence and taste insights.
They use AI to predict consumer taste and food trends through food science, helping companies stay informed, relevant and competitive. Their insights also aid company product development — helping food and beverage businesses unlock new opportunities for the future, which can enhance customer engagement, satisfaction and retention. Beyond collecting customer feedback and sentiment, Spoonshot are now offering a Beta version of their Genesis tool (to be released this June), a platform designed “to be the creative spark for early stage innovation, ideation and inspiration for FMCG (fast moving consumer goods) companies.” The power here is in prediction. “The technology behind the platform leverages a unique data set and proprietary algorithms to help FMCG companies stay on top and ahead of trends. The platform provides personalised insights, where our predictions present innovation opportunities, and where novelty is the driving force powering insights,” says Co-Founder and CEO Kishan Vasani. Kishan explains the data gathering process: “Essentially, we leverage a long tail of open, alternative data. That’s everything from regional food news portals to specialist research institutions. From this, we build proprietary structured data sets, connecting this data using machine learning, drawing signals and causality. With these knowledge repositories, our goal is to ultimately replicate human cognition in the domain of food. “The purpose of Genesis is to assist in a process leading to more successful product innovation. Essentially, bringing a closer alignment between consumer needs and the choices they are presented with. Other benefits address the needs of the FMCG companies themselves, such as being able to make sense of noise in data, and being given early indicators of future trends.” Consider flavour combinations that product designers and innovators might not have thought of yet — what are they, and would anyone be interested?
Spoonshot’s example involves Nutella and soy. Perhaps they could be looking at where insects are in demand or abundance next? August de Vocht, CEO NoFoodWasted NoFoodWasted This one seems like a no-brainer to me. An app that maps products approaching their expiration date, noting what level of discount is available — saving customers money while reducing food waste. NoFoodWasted engages both restaurants and supermarkets looking to act more responsibly, and they already have over 150 participating retailers. How does it work? “NoFoodWasted was the first company to use data to change the behavior of consumers to reduce food waste. Because of the inventorization we made of the available best-before-date products, we knew exactly what was available at different local supermarkets closest to our users. This, combined with insights on the grocery lists 75% of consumers make, gave us great insight to help change consumers’ buying behaviour — from purchasing products with a long shelf life to those that are nearing the end of their best-before date,” says Founder August de Vocht. The app uses sensors from mobile phones to gain insights on food waste from connected suppliers and consumers. This includes beacon technology and AI, and they’re planning to implement more computer science in the backend to make the app even smarter. “Smarter in terms of aligning better with what the market wants, what the market wants to pay, and what producers’ needs are.” The team is also working on a solution to indicate how much food the consumer is using and how much they need. “When we have the complete data on what the customer needs, we can bring this information to the market so that we all can produce less food, and that is the only solution against food waste.” August de Vocht, CEO NoFoodWasted There you have it — just 5 of the companies making waves in the datafood space.
I’m certain these innovators are onto something good, and I’m looking forward to being a part of it — working on real-life case studies through the JADS Data Science and Entrepreneurship program and in our Proof of Concept lab. This article is written in collaboration with Jai Morton.
https://towardsdatascience.com/5-companies-leading-the-way-in-datafood-and-why-we-believe-radical-change-is-needed-in-the-4eb9faedb00e
['Arjan Haring']
2019-04-11 21:09:19.782000+00:00
['AI', 'Data Science', 'Change', 'Food', 'Agriculture']
The Smartest Person in the World That You Have Never Heard Of
Portrait of Leonhard Euler by Jakob Emanuel Handmann (1753) Unless you study Mathematics or Physics at a university level, it is very unlikely that you have ever heard of Leonhard Euler (1707–1783). But as many of the people who know who he is would tell you, his impact on the scientific world cannot be overlooked. Similar to da Vinci, Euler didn’t focus on only one facet of science. He was known for his work in optics, astronomy, music theory, mechanics and fluid dynamics, with his work in these subjects shaping them for years to come, and some of his theories are still used today. The sheer magnitude of his discoveries meant that most weren’t named after him but after the first person to prove his theories, to avoid naming everything after him. From his birth, his intellect was clear to some, but because his father was a cleric, he wanted Euler to follow a path in theology. His saving grace was Johann Bernoulli, who came into contact with young Euler because Euler’s father was a family friend of the Bernoulli family. Johann Bernoulli was known to be Europe’s best mathematician at the time, and after convincing Euler’s father that young Euler was headed towards a great career in maths, he tutored him every Sunday, where his skill became ever more apparent. Problems with his Eyesight His scientific feats become even more impressive when we put the deterioration of his eyesight later in his adulthood into perspective. In 1738 he became almost blind in his right eye, which he blamed not on a recurring three-year fever but on the cartography work he performed for the University of Saint Petersburg. After moving to Germany because conditions at the University of Saint Petersburg were deteriorating as censorship became more and more prevalent, his eyesight worsened: later into his stay in Germany he developed a cataract in his left eye, his only working eye at the time, leaving him blind.
To this he remarked: “Now I will have fewer distractions.” — Leonhard Euler Instead of being put off by this, his productivity actually increased: with the help of some scribes he was able to produce mathematical papers on a one-a-day basis, highlighting his perseverance and his affinity for mathematics. Under Catherine the Great, Euler was persuaded to move back to Russia with the promise of a very large paycheck and pension, as well as high positions in government for all of his children. He accepted and spent the rest of his days in the prospering Russian Empire led by Catherine. He would later die of a brain haemorrhage while discussing the discovery of a new planet called Uranus, ending his life as he lived it, enquiring into science. Conclusion The lack of coverage of Euler in pop culture in favour of scientists like Einstein and Newton seems strange to me, as his achievements were monumental and in some aspects outweigh the achievements of either Newton or Einstein in terms of the volume of scientific discoveries. But because many of his achievements were named after the people who confirmed them rather than after Euler himself, his popularity lags behind his impact on the scientific world. Overall, everyone can learn from Euler’s work philosophy and how he applied himself to a subject he excelled at, as well as from the importance of recognising talent at a young age and allowing for its proper development so it can reach its potential. Without the help of Johann Bernoulli, Euler might never have ended up with his love for mathematics; under his father’s pressure to follow a path in theology, his intellect might have been wasted or applied in ways that would never have reached its full potential. Nor can we play down his perseverance after his loss of eyesight: not only did he disregard it as a simple fact of life, he also transformed his work ethic, becoming even more productive than ever.
I conclude from this that we need others to recognise our talent and help us develop it, and that we must learn how to turn roadblocks into ramps: instead of stopping, we lift off as a result of them.
https://medium.com/history-of-yesterday/the-smartest-person-in-the-world-that-you-have-never-heard-of-6b33ae3a4819
['Calin Aneculaesei']
2019-07-26 20:36:01.162000+00:00
['History', 'Astronomy', 'Mathematics', 'Science', 'Physics']
The next era of Bug Bounty at Pinterest
Devin Lundberg | Pinterest Tech Lead, Product Security When a security researcher discovers a bug in a piece of software, the responsible thing to do is inform the company so they can fix it. And so platforms like Pinterest need to provide clear and actionable programs, typically with rewards or recognition, for those with valid reports. For us, that’s come in the form of responsible disclosure policies, which we’ve evolved over the years. We work with Bugcrowd to manage the program and integrate with their existing community of researchers, across a variety of Pinterest properties including pinterest.com subdomains (such as help.pinterest.com), mobile apps, browser extensions and open source projects. Since 2015 we’ve given monetary rewards (or “bounties”) to researchers and have continually raised those rewards. We leverage Bugcrowd’s vulnerability rating taxonomy to fairly assign rewards based on severity, which allows for a reasonable expectation of reward from researchers and helps to focus attention on the most impactful types of bugs. The program has been a big success. Hundreds of researchers have participated in the program. We’ve rewarded more than $35,000 to more than 150 valid non-duplicate submissions, and the highest single reward was $2,500. Today we’re once again announcing increased rewards for all tiers of bugs to show our continued commitment to responsible disclosure and researchers. If you’re a security researcher or developer interested in participating in our program, read the brief and terms on Bugcrowd, and join us.
https://medium.com/pinterest-engineering/the-next-era-of-bug-bounty-at-pinterest-a646a72a0cd3
['Pinterest Engineering']
2018-11-13 19:39:08.942000+00:00
['Engineering', 'Security', 'Spam', 'Bug Bounty']
How Many Veterans Have to Work on Veterans Day?
I especially want this for Veterans who work retail. Imagine being told ‘thank you for your service’ by someone enjoying the day off in your name while you have to ring up their purchases or bring them food or serve them drinks. It’s obviously not the worst thing in the world but it just doesn’t seem right. And if small businesses can’t afford to pay Veteran employees for the day off they can apply to the civil forfeiture charitable trust for a scholarship or reimbursement, whichever they want to call it. But the bureaucracy of any paperwork cannot fall to the Veteran. Or said Veteran should also be given a $25–50 gift card to a store of their choosing and a free “foofy” coffee, as my favorite Veteran calls any non-Folgers type beverage with espresso. And a small-town salute to Buster’s Main Street Cafe in Cottage Grove, Oregon for offering a free meal to Veterans every year on Veterans Day. They are one of the few non-chain strip mall-type restaurants that take this financial hit in order to honor Vets with great food. My dad will be there today with some of his buddies from the 104th Infantry Division. My dad’s retirement ceremony after 36 years of military service And so, dear Veterans, I’d like to thank you for your service. But not in any way that requires you to work or interact with strangers in any way you don’t choose. It is my fervent hope that you can enjoy Veterans Day from your couch or road trip or Buster’s or wherever you choose. And if you’d like to consider my other propositions should I ever be crowned Queen of Cascadia, I have a few ideas that keep me up at night around election season every year. Your typical Oregon ballot runs amok with bond measures to repave streets and finance basics that our taxes should already be funding — public safety, libraries, schools, and fire districts. But even those perennial asks are outnumbered by the slate of constitutional amendments. Few will suffer the malleability of statutory measures.
They’re too easy to change and people want staying power. Columbus Day will be changed to Indigenous People’s Day, and schools can celebrate traditions of local tribes as an interactive way for students to learn about the respective history of tribes indigenous to the students’ areas. Election Day will be a federal holiday until all states have mail-in ballot options. Approval voting or ranked-choice voting for candidates. I can’t decide which, so this idea might itself require its own ranked-choice vote. Civil forfeiture in all jurisdictions with approval voting for how all seized assets are redistributed. I suggest allocations should be made from endowment dividends to NGOs and public services both. Motorists should be fined no less than $2,500 for driving with dogs in open-bed trucks. Chaining your dog to some part of a truck when it’s still open-bed would still be illegal. Drivers of transport vehicles should be fined no less than $2,500 for any uncovered material that could fly out of or off the vehicle — sand, gravel, bark chips, mattresses, whatever. If we’re all required to affix that road-soiled red flag to anything that extends past the length of the vehicle, drivers should also ensure that whatever they’re driving stays in their vehicle and not smashed/sprayed/splashed/chipped all over the windshield(s) behind them. No Christmas decorations or music before Thanksgiving. Businesses violating that will be fined $500 for each day of violation, and all money will be donated to the civil forfeiture charitable endowment. In the bloated fiefdom that is the US Congress, I have long had a dream that won’t pay off the federal deficit but will at least stop it metastasizing. My own congressman, Rep. Peter DeFazio, long ago declined the self-serving pay raise Congress gave itself in one of the ballsiest heists that ever took place in broad daylight. He diverts that money to college scholarships for Veterans. Swoon. But my dream goes a leap beyond.
I think legislators should only get paid whatever the current median wage is in their district. And they absolutely should not earn their exorbitant salary for life. They will be enrolled in PERS and will be eligible to collect that after they are vested. They, of course, will also be eligible to earn whatever social security is left over. But they will not be getting gold-plated health insurance. They can pay the same market prices the proletariat does for coverage until we have Scandinavian-style universal health care.
https://medium.com/with-liberty/how-many-veterans-actually-get-to-celebrate-veterans-day-fc0acd0c59b2
['Heather M. Edwards']
2020-06-08 06:38:51.998000+00:00
['Politics', 'Society', 'Culture', 'Veterans', 'Elections']
Karen’s Weekly Technology Hits Review
Artificial Intelligence in bins? I know of many council and supermarket recycling containers that would benefit. Some people can’t even be bothered to put the correct items in the corresponding bin! Also, Fife, the county I live in, doesn’t recycle plastic bags. I want to do everything I can to ensure packaging gets recycled; sadly, not everyone thinks the same way. The Issue of Contaminated Recycling According to CNN, throwing non-recyclable items (e.g., coffee cups, cardboard boxes) in with recyclable items in the recycle bin renders the recycling unusable. I like two of Audrey Malone’s articles so much I thought you lovely readers might enjoy them too. She asks and answers her title question. I agree with her preference. What do you think? Would you? The Elderly, Technology and Artificial Intelligence We have all seen it, either with our parents or grandparents: many people in the older age groups do not like technology and are not technologically savvy. My mom still won’t embrace technology. It’s a wonder she even has a touch-tone phone. But this trend has changed quite a bit in recent times, and there are many older people who know how to use technology well. So, seniors using robotic assistance shouldn’t be too scary — right? My dad admires Elon Musk so much he insists on sharing the incredible man’s achievements with me on a regular basis. I admit I wouldn’t have gone looking for the information, but I am very impressed with the man’s work ethic, skills, and ethos. I enjoyed reading about Mr Musk from Vivek Naskar’s point of view. Undoubtedly Musk is an inspiration for students, programmers, technology hobbyists, and entrepreneurs. He reinvented and reengineered multiple industries in a short span of time. For a new writer on Medium, Vivek does have rather appealing titles and then delivers the goods with his header images and texts. He got my attention with both of his titles, shared here with you.
I reckon this writer is one to follow for his excellent life hacks. Well, I bet I got you there and made you click on this article. But it is a genuine article that really stops you apologizing unnecessarily and makes you feel less guilty. Well, truth be told, many people consider “sorry” a safe word which, when used, makes them immune to the prying eyes of the people. But are you the person who can hear yourself saying “sorry” over and over again for basic and normal requests which really don’t need an apology? Guess what, you are annoying people. Stuart Englander drew me in with his title too. Who doesn’t like films, right? I’m delighted to watch a film again if I can’t remember the ending. This happens after about a year for me. As a teenager living in my first bedsit, I watched old black and white movies on my black and white TV on Saturday afternoons. I’m not a big fan of monochrome these days when there are so many full-technicolour options to choose from; however, there are some amazing Film-Noir options. I hadn’t watched the film Stuart is talking about until a year or so ago. It was brilliant and I will never forget the ending, but I might watch it again regardless. The genre that’s known as ‘Film Noir’ came to prominence in the post-war era with the emergence of stars like Humphrey Bogart and Edward G. Robinson. Some still argue that film noir isn’t a true genre at all. Regardless of its name, however, there is one film that blazed a trail of artistry in cinema that can never be denied. I hope you enjoyed this week’s selection of technology-related stories as much as I did.
https://medium.com/technology-hits/karens-weekly-technology-hits-review-ec65e74878b3
['Karen Madej']
2020-12-28 13:47:23.300000+00:00
['Film Noir', 'Advice', 'Technology', 'AI', 'Technology Hits']
Got To Give It Up: 15 Songwriters And Producers That Shaped The Motown Sound
Rashad Grove Photos: Motown Records Archives Emanating from Detroit, aka Motor City, the Motown sound forever transformed the landscape of soul and pop music. For the last 60 years, guided by the vision of founder Berry Gordy, Motown’s music has transcended generations and left an indelible imprint upon culture all over the world. While the label created superstars like Diana Ross And The Supremes, Four Tops, The Temptations, Gladys Knight And The Pips and a plethora of others, the major forces behind the tremendous success of “Hitsville USA” were the songwriters and producers who worked behind the scenes to give the world “The Sound Of Young America.” Here are 15 songwriters and producers who shaped the Motown sound. Listen to the best of Motown on Apple Music and Spotify, and scroll down for the 15 songwriters that shaped the Motown sound. Got To Give It Up: 15 Songwriters And Producers That Shaped The Motown Sound 15: Ivy Jo Hunter Ivy Jo Hunter is one of the unsung heroes of Motown. Trained in orchestral music, he began as a session player, then became a principal musician in the Motown house band before settling in as a songwriter and producer. He co-wrote ‘Ask The Lonely’ and ‘Loving You Is Sweeter Than Ever’ by Four Tops, Martha And The Vandellas’ anthem ‘Dancing In The Street’ and The Spinners’ ‘I’ll Always Love You’, and he produced the 1968 Top 40 hit single ‘You’ for Marvin Gaye. As an integral part of the Motown machine, Hunter accomplished much with little fanfare. Check out: ‘Dancing In The Street’ 14: Clarence Paul Clarence Paul is credited with mentoring “Little” Stevie Wonder, but he was also a writer and producer on some legendary Motown songs. He composed ‘Hitch Hike’ for Marvin Gaye and co-composed the energetic ‘Fingertips’, which, as the live recording ‘Fingertips — Part 2’, Stevie Wonder took to №1 on the Billboard Hot 100, becoming the youngest artist ever to top the chart.
Paul and Wonder began a fruitful songwriting partnership, resulting in ‘Until You Come Back To Me (That’s What I’m Gonna Do)’ and ‘Hey Love’, and he produced Wonder’s version of Bob Dylan’s ‘Blowin’ In The Wind’, which went to №1 on the R&B chart and №9 on the pop charts in the summer of 1966. Clarence Paul died in 1995, in Los Angeles, with Stevie Wonder at his bedside. Check out: ‘Hitch Hike’ 13: Harvey Fuqua If Harvey Fuqua did nothing but establish the R&B and doo-wop group The Moonglows, with whom Marvin Gaye got his start, that would have been enough. But Fuqua was instrumental in the early development of the Motown sound. While married to Gwen Gordy, sister of Berry Gordy, he distributed Motown’s first hit single, Barrett Strong’s ‘Money (That’s What I Want)’, on their Anna Records imprint. When Fuqua sold Anna Records to Berry Gordy, he became a songwriter and producer at Motown. Fuqua brought Tammi Terrell to the label and began to produce her classic duets with Marvin Gaye, including ‘Ain’t No Mountain High Enough’, ‘Your Precious Love’, ‘If This World Were Mine’ and ‘If I Could Build My Whole World Around You’. A true pioneer in African-American music, Harvey Fuqua died in 2010. Check out: ‘Ain’t No Mountain High Enough’ 12: Syreeta Wright Syreeta Wright was not only the muse but also the creative partner of Stevie Wonder as the latter was developing into one of the leading masterminds in music history. Together they wrote ‘It’s A Shame’ (recorded by The Spinners) and ‘Signed, Sealed, Delivered (I’m Yours)’, collaborated on Wonder’s 1971 album — the first project on which Wonder had full creative control — and also composed the sophisticated ‘If You Really Love Me’, which entered the Top 10 on the Billboard Pop Charts. Over the course of her career, Wright would continue to work with Wonder; she also made significant recordings with keyboardist extraordinaire Billy Preston and focused on her own solo work until her death in 2004.
Check out: ‘If You Really Love Me’ 11: Johnny Bristol A protégé of Harvey Fuqua, Johnny Bristol was a major component of the Motown sound of the late 60s and early 70s. He penned Motown standards such as Gladys Knight And The Pips’ ‘I Don’t Want To Do Wrong’, Jr Walker And The All-Stars’ ‘What Does It Take (To Win Your Love)’ and David Ruffin’s ‘My Whole World Ended (The Moment You Left Me)’. Bristol also holds the distinction of being producer and co-writer of the final singles for the Diana Ross-era Supremes and Smokey Robinson-era Miracles. With The Supremes’ ‘Someday We’ll Be Together’ (1969) and The Miracles’ ‘We’ve Come Too Far To End It Now’ (1972), Bristol gave Ross and Robinson fitting swansongs as they transitioned to solo acts. Bristol later resumed his own recording career, and continued to write and produce until he passed away in 2004. Check out: ‘Someday We’ll Be Together’ 10: Frank Wilson When Motown moved from Detroit to Los Angeles, writer/producer Frank Wilson was an integral part of the transition, joining Motown in the mid-60s at its newly opened office on the West Coast. Wilson wrote several hits, among them ‘Chained’ (for Marvin Gaye) and ‘You’ve Made Me So Very Happy’ (Brenda Holloway), which, two years later, became a gigantic hit for Blood, Sweat And Tears. As The Supremes’ music began to reflect changes in society, Wilson penned ‘Love Child’, which soared to №1 on the Billboard Hot 100. He composed ‘All I Need’ for Four Tops’ thematic Still Waters album and also handled production on The Supremes’ first albums of their post-Diana Ross era. Wilson continued his hot streak during the 70s, penning big hits with Eddie Kendricks (‘Keep On Truckin’’, ‘Boogie Down’, ‘Shoeshine Boy’), which took Motown into the disco era. After leaving the label in 1976, Wilson became a born-again Christian. He continued to write and produce R&B and gospel music until his death, in 2012.
Check out: ‘Keep On Truckin’’ 9: William “Mickey” Stevenson Every great record label needs an A&R person with an ear for songwriting and producing. At Motown, Mickey Stevenson was the man for the job. After his audition as a singer didn’t go well, Stevenson took Berry Gordy up on his offer to be the label’s A&R man. One of the most important brains behind the Motown operation, Stevenson oversaw classics like ‘Dancing In The Street’, which he co-wrote with Ivy Jo Hunter and Marvin Gaye; ‘It Takes Two’, co-written with Sylvia Moy for Gaye and Kim Weston, Stevenson’s former wife; ‘Ask The Lonely’, for Four Tops; Jimmy Ruffin’s ‘What Becomes Of The Brokenhearted’; and Gaye’s ‘Stubborn Kind Of Fellow’, among others. Of all his noteworthy accomplishments as a songwriter and producer, Stevenson’s greatest feat may have been establishing the Motown house band, the legendary Funk Brothers. Check out: ‘What Becomes Of The Brokenhearted’ 8: Lionel Richie Lionel Richie came to Motown as a member and primary writer/producer of the funk band Commodores, and was as comfortable writing ballads (‘Just To Be Close To You’, ‘Easy’, ‘Three Times A Lady’) as he was funk hits (‘Brick House’, ‘Lady (You Bring Me Up)’, ‘Too Hot Ta Trot’). His duet with fellow Motown superstar Diana Ross, ‘Endless Love’, is one of the most beloved songs ever written, and it sparked Richie’s solo career. After leaving Commodores, Richie catapulted into superstardom in the 80s. In 1982, the first single from his eponymous debut album, ‘Truly’, topped the Billboard Hot 100, while ‘You Are’ and ‘My Love’ both reached the Top 5. His next album, Can’t Slow Down, was even bigger, producing two №1 hits, ‘All Night Long’ and ‘Hello’, and earning him a Grammy for Album Of The Year. During the 80s, Richie was the most decorated of Motown’s songwriters and producers.
Check out: ‘Too Hot Ta Trot’ 7: The Corporation After Holland-Dozier-Holland left Motown, label founder Berry Gordy assembled a team of writers and producers, because he did not want any more “backroom superstars”. Gordy, along with Alphonso Mizell, Freddie Perren and Deke Richards, became known as The Corporation, and their first project was to create material for Motown’s newest signees, Jackson 5. The Corporation (whose members were never individually billed) came out of the box smoking in 1969 with the №1 hit ‘I Want You Back’, and followed it up with ‘ABC’, ‘The Love You Save’ and ‘I’ll Be There’, in 1970. A short-lived ensemble, The Corporation disbanded in 1972, when Hal Davis took over producing duties for Jackson 5. Check out: ‘I Want You Back’ 6: Marvin Gaye Known as the “Prince Of Motown”, Marvin Pentz Gaye became a superstar solo act, but his work as a key songwriter and producer for Motown cannot be overstated. He cut his teeth writing ‘Beechwood 4–5789’ for The Marvelettes, in 1962, and ‘Dancing In The Street’ for Martha And The Vandellas. For The Originals, who sang background on some of Motown’s biggest releases, Marvin wrote and produced the doo-wop-influenced singles ‘Baby I’m For Real’ (1969) and ‘The Bells’ (1970), both of which reached the Top 15 on the Billboard Pop charts. Reworking an original idea by Renaldo “Obie” Benson, Gaye developed the classic song ‘What’s Going On’. On the masterwork album of the same name, Gaye continued to develop his songwriting, composing ‘Mercy, Mercy Me (The Ecology)’ and ‘Inner City Blues (Make Me Wanna Holler)’. Shortly after, ‘Let’s Get It On’ became a №1 hit for Gaye in 1973, and the parent album was both commercially successful and revered by critics. Throughout the 70s, Gaye’s songwriting and producing resulted in further landmark works such as ‘I Want You’, ‘Got To Give It Up’ and countless others.
Check out: ‘Inner City Blues (Make Me Wanna Holler)’ 5: Stevie Wonder Child prodigy “Little” Stevie Wonder would grow into the genius the world knows as, simply, Stevie Wonder. After working as an apprentice to The Funk Brothers and being mentored by Clarence Paul, Wonder was ready to express his virtuosity as songwriter and producer. He co-wrote ‘Tears Of A Clown’ in 1970, helping to give Smokey Robinson And The Miracles their only chart-topping single. That same year’s ‘Signed, Sealed, Delivered (I’m Yours)’ was Wonder’s first self-produced hit, peaking at №3 on the US Pop chart. Stevie entered the 70s with his full artistry on display, composing ‘It’s A Shame’ for The Spinners. He also co-wrote and produced Syreeta Wright’s first two albums. In 1972, Wonder embarked on his “classic period”, during which he released Music Of My Mind, Talking Book (both 1972), Innervisions (1973), Fulfillingness’ First Finale (1974) and his 1976 magnum opus. He continued to score hits throughout the 80s. Though his work rate slowed in the decades since, Stevie Wonder remains the consummate Motown songwriter and producer. Check out: ‘Signed, Sealed, Delivered (I’m Yours)’ 4: Ashford and Simpson Hailing from New York City, Ashford and Simpson brought an East Coast sensibility to Motown. Joining the label as staff writers in 1966, the couple were assigned to Marvin Gaye and Tammi Terrell, and they wrote and/or produced all but one of the duo’s late-60s singles, including some of Motown’s best duets, such as ‘Ain’t No Mountain High Enough’, ‘Your Precious Love’, ‘Ain’t Nothing Like The Real Thing’ and ‘You’re All I Need To Get By’. Their winning streak continued through the 70s, when Ashford and Simpson wrote and produced almost all the songs on Diana Ross’ self-titled debut album, among them the gospel-inspired ‘Reach Out And Touch (Somebody’s Hand)’ and Ross’ grandiose revision of ‘Ain’t No Mountain High Enough’.
For her Surrender album they penned ‘Remember Me’, and they also contributed the disco-flavoured title track of The Boss. After a partnership in both music and marriage that lasted almost 50 years, Nick Ashford passed away in 2011. Check out: ‘Ain’t Nothing Like The Real Thing’ 3: Norman Whitfield Through grit and determination, the incomparable Norman Whitfield ascended through Motown’s ranks and led the label into the 70s with his interpretation of psychedelic soul. Starting in the quality-control department, he went on to co-write Marvin Gaye’s hit ‘Pride And Joy’, The Marvelettes’ ‘Too Many Fish In The Sea’ and The Velvelettes’ ‘Needle In A Haystack’. Whitfield replaced Smokey Robinson as the main producer for The Temptations in 1966, when his smash hit ‘Ain’t Too Proud To Beg’ outperformed Robinson’s ‘Get Ready’ on the pop charts. Alongside frequent collaborator Barrett Strong, Whitfield had an unprecedented run producing some of The Temptations’ greatest songs, including ‘(I Know) I’m Losing You’, ‘Cloud Nine’, ‘I Can’t Get Next To You’, ‘Ball Of Confusion (That’s What The World Is Today)’, ‘Just My Imagination (Running Away With Me)’ and ‘Papa Was A Rollin’ Stone’. He also crafted ‘War’ for Edwin Starr and ‘I Heard It Through The Grapevine’, which Gladys Knight And The Pips tackled in 1967 before Marvin Gaye made it a crossover smash the following year. Whitfield was the most prominent producer at Motown until his departure in 1975. He passed away in 2008, leaving a legacy of unforgettable music. Check out: ‘Ain’t Too Proud To Beg’ 2: Smokey Robinson Bob Dylan called him “America’s greatest living poet”, and William “Smokey” Robinson has been the poet laureate of Motown since the beginning. 
As the lead vocalist of The Miracles, Smokey composed some of Motown’s best-known early material, including ‘Shop Around’, which became the label’s first million-selling hit record, ‘You’ve Really Got A Hold On Me’, ‘I Second That Emotion’ and ‘Baby, Baby Don’t Cry’, as well as co-writing the group’s only №1 hit during his years with them, ‘The Tears Of A Clown’. All in all, Smokey composed 26 Top 40 hits for The Miracles. He’s also responsible for ‘My Guy’, which Mary Wells took to the top of the charts; ‘The Way You Do The Things You Do’, ‘My Girl’, ‘Since I Lost My Baby’ and ‘Get Ready’, all gifted to The Temptations; and ‘Ain’t That Peculiar’, which became Marvin Gaye’s second №1. Even later in his career, with hits like ‘Quiet Storm’ and ‘Cruisin’’, his pen was still poetic. Smokey remains an ambassador and undoubtedly one of the key architects of the Motown sound. Check out: ‘My Guy’ 1: Holland-Dozier-Holland It could be argued that Holland-Dozier-Holland are the most prolific songwriting and production team in pop music’s long history. Over the course of five years, from 1962 to 1967, the trio wrote, arranged and produced many of the compositions that helped to establish the Motown sound. Lamont Dozier and Brian Holland served as the composers and producers for each song, while Eddie Holland wrote the lyrics and arranged the vocals. The result was Motown magic. H-D-H composed 25 №1 hit singles, such as Martha And The Vandellas’ ‘Heat Wave’ and Marvin Gaye’s ‘How Sweet It Is (To Be Loved By You)’, and they also turned out classics for Four Tops (‘Baby I Need Your Loving’, ‘Reach Out, I’ll Be There’) and The Supremes, penning ten of the latter group’s 12 №1 hits, including ‘Baby Love’, ‘Stop! In The Name Of Love’ and ‘You Keep Me Hangin’ On’. Without question, Holland-Dozier-Holland were the engine that drove the Motown machine to success. 
Check out: ‘Baby Love’ All of Motown’s chart-topping hits feature on the newly expanded 11CD edition of the Motown: The Complete №1s box set.
https://medium.com/udiscover-music/got-to-give-it-up-15-songwriters-and-producers-that-shaped-the-motown-sound-c390adde129f
['Udiscover Music']
2019-11-08 05:13:36.061000+00:00
['Music', 'Culture', 'Lists', 'Soul', 'Pop Culture']
What Women Want You to Know About Pregnancy Loss
What Women Want You to Know About Pregnancy Loss Mothers share the things that surprised them, grieved them, and helped them during the pain of miscarriage. Photo by Andrey Zvyagintsev on Unsplash I love Chrissy Teigen. No, I don’t know her. She doesn’t know me. But she is smart, and funny, and real. And her suffering right now is a parent's worst nightmare, a loss that is wordless. Her photos and words are raw, beautiful in their humanity, and timeless. It’s not easy to share this pain with the world at large, but I’m so, so thankful that she chose to. Because we need to normalize talking about this, and recognizing pregnancy loss as being REAL loss. Women need to be able to share their stories, to connect with one another, to speak about these things they hold inside. If you know more than 4 women, you know someone who’s experienced a pregnancy loss. I have never suffered a known miscarriage, but I’d like to use my space here to help other women feel less alone. You see, if you know more than 4 women, you know someone who’s experienced a pregnancy loss. You might not know it, but it’s extremely common. I reached out to friends and women in my community to ask them: what do you wish people knew about pregnancy loss? What do you want people to understand? This is what they said. Miscarriages are common. My first pregnancy ended in a miscarriage, and I didn’t know that anyone I knew had gone through the same thing. Once I shared what I was going through, friends and family supported me and shared that they had also experienced pregnancy loss. I try to be open about my experiences now so that other women don’t feel alone. If you’re experiencing miscarriage, you are not alone. Though the most commonly recognized statistics show that around 1 in 4 pregnancies end in miscarriage, some studies have shown that the rate of pregnancies ending in full-term birth is much lower. 
As a culture, building conversation and support for parents experiencing these losses is so important. There is no comfort in “at least.” The worst thing people told me was “at least you weren’t that far along”. Yes, it’s true. I wasn’t far along, but we had been trying to get pregnant for over a year, and my heart was absolutely broken. I lost a baby, even though it was early. I’m thankful that I have my rainbow baby. But I still have days where I break down and I’m not okay thinking about my loss. Nothing can explain that heartbreak. Any statement beginning with “at least” is of no comfort to parents experiencing the loss of their baby. Some people offer words like at least you know you can get pregnant, at least you were only three weeks along, or at least you hadn’t had time to get excited yet as attempted silver linings. The reality is that all these words do is downplay and ignore the pain, trauma, and deep sense of loss people might be feeling. My baby was meant to be, their loss was not. I know what I didn’t want people to say. “It’s ok, you can just have another one!” Or “At least you still have some at home” or “It wasn’t meant to be.” But I wanted THAT baby and no other baby can replace that baby I lost. I just wanted them to simply say they were sorry for my loss and pain. I wish people knew you would cycle through grief and the pain would come and go at the most random times. Each time I passed what would have been my due date was so painful! In times of grief, some people try to offer words along the lines of everything happens for a reason. Some miscarriages do happen for a reason, but even knowing why the pregnancy wasn’t viable doesn’t mitigate the heartbreak of losing a baby that was wanted and loved. These losses matter. I couldn’t stop feeling guilty. We have suffered a few losses, all early, all hard. 
The honest to goodness most helpful thing was just knowing other people had gone through this, it helps alleviate the unavoidable sense of guilt. I cried with Chrissy yesterday, a complete stranger I have no connection to - I am so grateful she shared this story. I am so grateful she is being unapologetically sad, we need more of this rawness in our social media. From outside the situation, it may be obvious that most women experience pregnancy loss by no fault of their own. From the inside, it’s not so cut and dried. Feelings of guilt are extremely common in women who have miscarried. Humans seem hard-wired to search for the whys, especially when bad things happen. Understanding in your mind that something is not your fault is not always the same as understanding it in your heart. I felt like something was wrong with me. I watched so many friends, coworkers, and family members get pregnant and it seemed like it was so easy for them. They didn’t wait to announce that they were expecting and none of them ever said anything about having experienced a loss. I couldn’t understand why they got to carry babies and I didn’t, and it made me feel hopeless and alone. Losing a pregnancy is not abnormal. Women experiencing pregnancy loss are not outliers, and much of the time are able to go on to have full-term pregnancies. Because people don’t talk about it, it can be easy to feel like you are experiencing something other women aren’t, but the reality is that most of the time, nothing is wrong with you. So many women’s experiences highlight the need for more and louder conversation about this. Early loss is still loss. Last summer I knew I was pregnant for about a week- I was technically around 5 weeks pregnant when I miscarried. It was awful. Just the hopelessness and sadness. I was so anxious when I started spotting, my husband tried to assure me that it was normal and not to worry. The whole process just showed me how precious pregnancy is. 
Getting pregnant and staying pregnant can be so difficult. I knew that 1 in 4 pregnancies result in a miscarriage but it was so easy for us the first time. I can’t imagine the devastation of being halfway through and losing your baby. I’ve been super sad for her since I read her post last night. It doesn’t matter how far along a pregnancy is when it ends. Having a child is a big decision, and it’s one that many people plan for, look forward to, and sometimes try for years. No matter how long it’s been since you found out you were pregnant, for many women the excitement, hope, and love for that growing kid are immediate and overpowering. The hardest part was the next pregnancy. My first pregnancy was an early loss (6 weeks). The loss itself was hard, because only about 5 people had even known I was pregnant. There weren’t a lot of people to talk to about it, and none that had also experienced a loss. I had a gal at work who was 7 months pregnant at the time and it was hard to watch her happiness through my grief. But honestly, the hardest part was when we got pregnant again. All that joy and anticipation of a baby was sucked out and I was so nervous and anxious that it would happen again that I didn’t get to enjoy it. It took a long time for either my husband or me to be comfortable with the pregnancy, and even then we were both reserved about it. The further you get from a loss, the more you learn to live with the pain. Getting pregnant again can be a huge trigger for women who have experienced pregnancy loss. It changes the landscape of feelings and fears related to carrying a baby. Remember that the numbers apply to subsequent pregnancies — at least 25% of pregnant women have experienced a loss in their past. Sometimes the best thing to do is just listen. I lost my daughter at 22 weeks 3 days, followed by a miscarriage at 15 weeks, then 9 weeks. I felt so bad for her when I saw her post on Instagram. People sometimes say the worst things. I’ve heard it all. 
It’s better to say nothing than to make hurtful comments. If you don’t know what to say to someone, just say you’re sorry for their loss and tell them that you’re there for them in any way they need. Don’t avoid them. Also don’t pretend that it didn’t happen. Let them talk about their baby and what they went through. Sometimes the best thing to do is just listen. I think it’s human instinct to want to help when someone you care about is in pain. Many times, our brains try to do this by offering words we might find comforting or giving advice. It’s hard to just sit and listen, and offer support in silence, but often it’s the most helpful and supportive thing you can do. You have to feel your pain. I can’t say that I wish I knew anything, because it’s been over a year and I’m still processing and waiting for someone to tell me something I wish I knew. It all hurts the same every day. The best thing to do is FEEL your pain, in the moment, and as long as you need to. Go through it and grieve as long as you need to, but that’s something I just couldn’t do. I ignored the pain and numbed it multiple ways. I didn’t care about anything else, I just wanted my babies (all 3 of them), and to this day I’m still trying to understand why I wasn’t given that opportunity. I have a 5 year old who constantly asks me when she’s able to have a brother or sister and it just breaks my heart. I have a friend who lost her husband when her daughter was under two years old, and she has reminded me so many times that you can’t go over it or around it, you just have to go through it. Pain and grief don’t go away, and if you try to ignore them it’s like throwing a blanket over a pile of unfolded laundry. The pile may look different, and you may let yourself believe you’ve forgotten it, but it’s still there. People grieve differently. Suffering a loss as a couple is different from going through one alone. 
My boyfriend didn’t want to talk about the miscarriage at all, and when he was turning inward, I wanted his support. It was hard to feel like he was moving on when I couldn’t. I realize now that we just had to feel our grief differently. Not talking about something isn’t the same thing as not feeling it. People experience feelings in different ways, and just because their method of coping is different doesn’t mean they’re not having the feelings. Whether it’s within a couple or different women or family members experiencing loss, the grief won’t look the same for everybody. You can never assume you know what someone is going through. Not all miscarriages are the same. People make assumptions when you tell them you had a miscarriage, but mine have all been really different. One of them I realized when I started bleeding and having horrible painful cramps. But the next one, everything felt normal until they told me at my 12 week ultrasound that there was no heartbeat. Pregnancy loss is a varied experience. Some women lose their pregnancies before they even find out about them, but realize they’re not just having a normal period. Others lose their babies much further along, at 12 or 18 or 22 weeks. Some women bleed or spot throughout their pregnancies, some don’t. When you find out that someone has experienced a miscarriage, you can’t assume that you know what they’ve gone through physically. Miscarriage is messy — physically and emotionally. I have had three miscarriages throughout my life. Each one was difficult in its own ways and circumstances, but the first was very traumatic. I was 14 years old, in a deeply religious family, and miscarried between 3 and 4 months along. My family did not know of the pregnancy and I had no medical or emotional support. The pain and guilt of that loss stayed with me for years. Even to this day, my heart still mourns the loss of Malachi. 
Not having proper and healthy emotional support for processing that loss broke me in a way that left a huge scar on my heart. I feel that it gave so much weight to the loneliness felt with my future miscarriages. Like they were a secret to be hidden that no one should see. I held a guilt for feeling somehow irresponsible for letting it slip away. Like feeling that the miscarriages were due to my failure and neglect somehow. Even though I cognitively understood that miscarriages are a very common occurrence, those feelings of responsibility overshadowed them. It wasn’t just the feeling of having “a loss”, it was a more active feeling that “it was a loss because I’d lost it”. Does that make sense? Miscarriages are not discussed in our culture nearly enough. They are often kept in the shadows and the weight is all placed on our broken hearts to carry. In addition to the support needed for the emotional effects, there also needs to be education about the lingering physical effects as well. A miscarriage is not a simple process that happens quickly and then it’s all done. It’s not a neat little package you can tie up with a bow. It can take days for the miscarriage to physically happen, but your hormones also go completely cattywampus for a long time, and this greatly affects your mental state. It wasn’t until my third miscarriage that I even understood that was a part of it. Once I had just that little bit of knowledge, it gave me such a better perspective and compassion for myself. It gave validation to my reality and with that came a power to make it past that point in time. Not all miscarriages are grieved. I was around 6 or 7 weeks pregnant when I miscarried, and I didn’t even know I was pregnant. I was not trying to conceive, and neither my husband nor I was ready for a kid. 
We didn’t grieve our loss, it was just a thing that happened, and that’s okay too. Some women do not experience immense pain or grief following a pregnancy loss. That doesn’t mean there is anything wrong with them. Everyone has different things going on in their lives, different circumstances, different hopes, and dreams. We experience things differently, that’s one of the things that makes us human. Whether you are devastated or relieved or blasé or sad, you are allowed your feelings. Pain changes, but it never goes away. This is a topic that is not talked about nearly enough. It’s every parent’s worst nightmare. I think the hush-hush attitude, and others not wanting to speak or ask about a lost child, can be a heavy burden to carry. It’s a lifetime of pain that never goes away. It does get easier, but it’s always there. One mother shared that people saying there would be other babies was the hardest thing for her to cope with: This has me so emotional right now. It’s the worst experience of many people’s lives, mine included. Even though I have a beautiful rainbow baby now, I still mourn my loss. The worst thing people can say is that there will be other babies. My own parents said that to me even though they experienced a loss before my oldest sister was born. Not long after mine, I came across a podcast episode about how grief comes in waves, and you never know when another wave will come and knock you down. How you can think you’re finally doing better and moving along in life, then another wave comes. There is so much truth in this. Even though it’s been almost 2 years, seeing Chrissy’s post caused another wave. A TED talk I heard shortly after was also about grief, and how you do not move on from it, you move forward. That is so important. You never forget that baby, they’re always in your heart. Everything we experience in our lives makes us who we are. Losing a pregnancy becomes a permanent part of your story, the same way having a child does. 
Grief may shift, fade, or move further from the surface of everyday life, but it never goes away completely.
https://medium.com/fearless-she-wrote/what-women-want-you-to-know-about-pregnancy-loss-c9437adf8482
['Rachael Hope']
2020-10-16 15:03:07.497000+00:00
['Health', 'Grief', 'Pregnancy', 'Parenting', 'Women']
The Worst Enemy of Progress (2/5)
Photo by Jp Valery on Unsplash In my previous article on progress we’ve seen that fashion and marketing are no friends of progress, because they bind existing know-how into doing useless things. But why do companies see fit to create hype? Why on earth would two big corporations go to such lengths to create programming languages and build hype around them? And why would entrepreneurs generate buzzwords like WEB2.0 and sell their business plans based on those? Let’s take those examples one by one. The company’s (Sun Microsystems, if you still remember that name) explanation for the programming language hype was: “because it is cross-platform” and “because it is simple and powerful”. Simple? How long does it take a newbie to get the signature of “public static void main(String[] args)” right? And to remember to wrap it in a “public class Foo”? (Remember, there was no Eclipse or NetBeans when Java started, no auto-completion etc.) A Foo that is also in its own, properly named file? And that he has to rename the file to Foo.java (from foo.java or FOO.JAVA) when he copies the file to any real file system? Now why didn’t they, for example, take a sub-set of C++ and modify it to produce byte code similar to Java’s? It’s not that hard to change the assembler part of an existing machinery. Or a language similar to Pascal or Delphi, if they wanted a “stricter” language? Why invent a language with a longer (and therefore slower to type => more typo-prone) syntax that had, at the beginning, any number of shortcomings to compensate for the benefit of the garbage collector? (Oh, by the way, pretty much all scripting languages where you can dynamically create objects and reference them have a garbage collector - even Visual Basic.) Or why didn’t they use an existing runtime, like that of the aforementioned LISP? I’d say it looks suspiciously like the real reasons were - because they wanted to OWN a programming language (tm). 
- because they specifically didn’t want new software that would compile with existing compilers out there, but wanted to bind people to their platform. - because if you own a programming language, you can sell all kinds of things associated with it: — lessons — books — software services — additional software libraries (…”enterprise editions”…) - Also, at the beginning you are the only one who can. Since you invented it, and you own it, and you don’t publish the specs until way later, when other people start to write books and libraries for your language based on their experience. - At which point you simply create a new version that has new features that are not backwards-compatible with the old one, and start selling new books and libraries for that one. - rinse, repeat, ad nauseam. And how do you bait people into it? - promise “cross-platform” (“write once, run anywhere”), since this is a “known problem” in the software industry, and it seems like the existing solutions to the problem are — still — a lot less well-known. - promise “less error-prone” and “faster software development” through “garbage collection” and “type safety”. (The solution of using a sub-set of C++ where you make certain operators illegal and that compiles into a byte code would, by the way, do the very same thing, but would allow you to compile the stuff with existing compilers, and allow existing C++ programmers with >10–15 years of experience to program on the new platform with no adjustment time. And this would be bad… how?) Summary: the motivation is to create a new, previously non-existing market. To earn money. In other words, greed. And since Microsoft EXCELs at greed, they simply had to follow that trail with .NET. Promising basically the same things. And of course, the motivation to put IT buzzwords (WEB2.0, AJAX, XML, …) on a business plan is exactly the same: attract investors => get money. And of course the entrepreneur would do that. 
Since the investor wouldn’t budge unless he SEES a buzzword! Or how else should he, who has no idea of the trade, recognize what’s worth investing into? Take AJAX, for example. It looks very much like the word was invented/hyped solely for that reason — since people had been using the actual technologies behind it for years before the term was invented (a little Google search will help), and also, in many configurations used today it’s technically not really AJAX, because it transfers the data as YAML or javascript/JSON — because with most types of XML the garbage-to-data ratio is a little too high. Like, at the very least 2:1 or worse. Sometimes much, much, MUCH worse (RosettaNet EDI, I am looking at you!) Even Java 2 object serialization has a more compact format and is faster to parse… But that’s a different topic. So what does all of that tell us? Similar to the points I’ve discussed in the previous article: - If somebody invents a new toolbox and convinces people that “his is better” while it is actually not, or - if people are forced to re-start their projects using buzzword technologies instead of working technologies because otherwise they don’t get funded, it takes people with know-how away from existing projects, and binds them to learning the new stuff. Stuff that doesn’t actually let them accomplish whatever they were working on any faster. Which means, now they have to add the learning curve of the new stuff to the time their projects need to get done — with no additional benefit. And greed (or some would say, capitalism) is responsible for both of the phenomena. So is greed the worst enemy of progress? Well, that could very well be. But there is a little more to that story. It has to do with human life span, experience, time, and of course, progress. Tune in for the next episode of … Progress Wars!… ;-)
https://medium.com/swlh/the-worst-enemy-of-progress-2-effa17ee2d25
['J. Macodiseas']
2020-10-24 10:23:19.034000+00:00
['Marketing', 'Greed', 'Progress', 'Software Development', 'Waste']
Building A Logistic Regression in Python, Step by Step
Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X. Logistic Regression Assumptions Binary logistic regression requires the dependent variable to be binary. For a binary regression, the factor level 1 of the dependent variable should represent the desired outcome. Only the meaningful variables should be included. The independent variables should be independent of each other. That is, the model should have little or no multicollinearity. The independent variables are linearly related to the log odds. Logistic regression requires quite large sample sizes. Keeping the above assumptions in mind, let’s look at our dataset. Data The dataset comes from the UCI Machine Learning repository, and it is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict whether the client will subscribe (1/0) to a term deposit (variable y). The dataset can be downloaded from here.

import pandas as pd
import numpy as np
from sklearn import preprocessing
import matplotlib.pyplot as plt
plt.rc("font", size=14)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
sns.set(style="white")
sns.set(style="whitegrid", color_codes=True)

The dataset provides the bank customers’ information. It includes 41,188 records and 21 fields. 
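Since the model predicts P(Y=1) as a function of X via the logistic (sigmoid) link, here is a quick numeric sketch of that function. This helper is my own illustration, not part of the tutorial’s code:

```python
import numpy as np

# The logistic (sigmoid) function squashes any real-valued score z
# into a probability: P(Y=1 | X) = 1 / (1 + exp(-z))
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5 — z = 0 sits exactly on the decision boundary
print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # probabilities increase with z
```

The fitted model learns the coefficients inside z; the sigmoid only converts that linear score into a probability.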
Figure 1 Input variables age (numeric) job : type of job (categorical: “admin”, “blue-collar”, “entrepreneur”, “housemaid”, “management”, “retired”, “self-employed”, “services”, “student”, “technician”, “unemployed”, “unknown”) marital : marital status (categorical: “divorced”, “married”, “single”, “unknown”) education (categorical: “basic.4y”, “basic.6y”, “basic.9y”, “high.school”, “illiterate”, “professional.course”, “university.degree”, “unknown”) default: has credit in default? (categorical: “no”, “yes”, “unknown”) housing: has housing loan? (categorical: “no”, “yes”, “unknown”) loan: has personal loan? (categorical: “no”, “yes”, “unknown”) contact: contact communication type (categorical: “cellular”, “telephone”) month: last contact month of year (categorical: “jan”, “feb”, “mar”, …, “nov”, “dec”) day_of_week: last contact day of the week (categorical: “mon”, “tue”, “wed”, “thu”, “fri”) duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y=’no’). The duration is not known before a call is performed, also, after the end of the call, y is obviously known. 
Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact) pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted) previous: number of contacts performed before this campaign and for this client (numeric) poutcome: outcome of the previous marketing campaign (categorical: “failure”, “nonexistent”, “success”) emp.var.rate: employment variation rate — (numeric) cons.price.idx: consumer price index — (numeric) cons.conf.idx: consumer confidence index — (numeric) euribor3m: euribor 3 month rate — (numeric) nr.employed: number of employees — (numeric) Predict variable (desired target): y — has the client subscribed a term deposit? (binary: “1”, means “Yes”, “0” means “No”) The education column of the dataset has many categories and we need to reduce the categories for a better modelling. The education column has the following categories: Figure 2 Let us group “basic.4y”, “basic.9y” and “basic.6y” together and call them “basic”. 
data['education'] = np.where(data['education'] == 'basic.9y', 'Basic', data['education'])
data['education'] = np.where(data['education'] == 'basic.6y', 'Basic', data['education'])
data['education'] = np.where(data['education'] == 'basic.4y', 'Basic', data['education'])

After grouping, these are the categories: Figure 3 Data exploration Figure 4

count_no_sub = len(data[data['y']==0])
count_sub = len(data[data['y']==1])
pct_of_no_sub = count_no_sub/(count_no_sub+count_sub)
print("percentage of no subscription is", pct_of_no_sub*100)
pct_of_sub = count_sub/(count_no_sub+count_sub)
print("percentage of subscription", pct_of_sub*100)

percentage of no subscription is 88.73458288821988
percentage of subscription 11.265417111780131

Our classes are imbalanced, and the ratio of no-subscription to subscription instances is 89:11. Before we go ahead to balance the classes, let’s do some more exploration. Figure 5 Observations: The average age of customers who bought the term deposit is higher than that of the customers who didn’t. The pdays (days since the customer was last contacted) is understandably lower for the customers who bought it. The lower the pdays, the better the memory of the last call and hence the better chances of a sale. Surprisingly, campaigns (number of contacts or calls made during the current campaign) are lower for customers who bought the term deposit. We can calculate categorical means for other categorical variables such as education and marital status to get a more detailed sense of our data.
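Once the exploration is done, the eventual model fit follows the usual scikit-learn pattern. The snippet below is a hedged sketch of that pattern on synthetic stand-in data: with the real dataset you would first one-hot encode the categorical columns (e.g. with pd.get_dummies), and the class_weight='balanced' option shown here is just one simple way to address the 89:11 imbalance, not necessarily the approach this tutorial takes later:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data with the same shape of problem (binary y, numeric features).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    'age': rng.integers(18, 90, 500),
    'campaign': rng.integers(1, 10, 500),
})
# Synthetic target loosely driven by age, so there is something to learn.
y = (X['age'] + rng.normal(0, 10, 500) > 50).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# class_weight='balanced' reweights the loss by inverse class frequency.
model = LogisticRegression(class_weight='balanced', max_iter=1000)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```

On the real data, X would be the dummy-encoded feature frame and y the term-deposit column.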
https://towardsdatascience.com/building-a-logistic-regression-in-python-step-by-step-becd4d56c9c8
['Susan Li']
2019-02-27 05:26:00.333000+00:00
['Machine Learning', 'Data Science', 'Python', 'Logistic Regression', 'Classification']
Create a Text-to-GIF Animation
2. JSON to Images I started with defining a simple JSON to test if I can render images based on this structure. The JSON should describe: sprites — image URLs for our actors and decorations; scenes — should contain and position actors and decorations; frames — should contain actions, like "Ann moves left".

({
  sprites: { name: 'http://sprite.url' },
  scenes: // scene descriptions
  {
    scene_ONE: {
      entries: /* entries with their sprites and position */
      {
        Ann: { sprite: 'woman', position: { /* ... */ } }
      }
    },
  },
  frames: [
    { scene_name: 'scene_ONE', actions: [{ target: 'Ann', action: 'move', value: {x, y} }] },
    // ...other frames
  ]
})

For the actors, I’ve defined three preset sprites — tree, woman, and man — and added relevant images to the project. Now for each frame, we’ll perform all the actions (move and talk).

// for each frame
const computedFrames = frames.map(frame => {
  // clone entries
  const entries = _.merge({}, frame.scene.entries);
  // perform actions on the target entry
  frame.actions.forEach(action => {
    const entry = entries[action.target];
    if (action.type == 'talk') { entry.says = action.value; }
    if (action.type == 'move') { entry.position = action.value; }
  });
  return { entries };
});

For drawing entry sprites, we’ll surely use Canvas:

// draw the entries
const images = computedFrames.map(frame => {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  Object.values(frame.entries).forEach(entry => {
    ctx.drawImage(entry.sprite, entry.position.x, entry.position.y); // for sprites
    ctx.fillText(entry.says, entry.position.x, entry.position.y);    // for speech
  });
  // return rendered frame URL (simplified — toBlob is actually async)
  return URL.createObjectURL(canvas.toBlob());
});

Canvas can export its contents as a data URL or a blob. We’ll need this to generate .gif later. In reality, the code is a bit more asynchronous: toBlob is async, and all the images should be downloaded before ctx.drawImage. I used a Promise chain to handle this. 
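That Promise chain can be sketched like this — my own illustration of the approach described above, not the devlog’s actual code:

```javascript
// canvas.toBlob() delivers its result through a callback, so wrap it
// in a Promise to make it chainable.
function canvasToBlob(canvas, type = 'image/png') {
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      blob => (blob ? resolve(blob) : reject(new Error('toBlob produced no data'))),
      type
    );
  });
}

// Rendering every frame then becomes a single chain, e.g.:
// Promise.all(frameCanvases.map(canvasToBlob))
//   .then(blobs => blobs.map(blob => URL.createObjectURL(blob)));
```

The same wrapper works for preloading sprite images (resolve the Promise in the image’s onload handler) so that every draw happens only after its inputs are ready.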
At this point, I had proved that the images could be rendered as intended. The first rendering! Hurray! So we can move on…
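The Promise chain the author mentions can be sketched roughly as follows. This is my own illustration, not the article's code: sequence runs async tasks one after another, and renderFrame is a hypothetical stand-in for the real per-frame work (preloading images, drawing to canvas, resolving with a blob URL):

```javascript
// Run async tasks strictly one after another, collecting their results.
// This mirrors the "Promise chain" idea: each frame is rendered only
// after the previous one has finished.
const sequence = tasks =>
  tasks.reduce(
    (chain, task) =>
      chain.then(results => task().then(result => [...results, result])),
    Promise.resolve([])
  );

// Hypothetical stand-in for per-frame rendering: in the real app this
// would preload images, draw to a canvas, and resolve with a blob URL.
const renderFrame = frame => () =>
  Promise.resolve(`blob:frame-${frame.id}`);

// usage sketch:
const frames = [{ id: 0 }, { id: 1 }];
sequence(frames.map(renderFrame)).then(urls => {
  // urls is ['blob:frame-0', 'blob:frame-1'], in frame order
});
```

Promise.all would also work here and preserves order too; a sequential chain just bounds how much rendering work is in flight at once.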
Source: https://medium.com/better-programming/text-to-gif-animation-reactjs-devlog-5378fc70fd1
Author: Kostia Palchyk
Published: 2020-06-26
Tags: Animation, JavaScript, Programming, React, Web Development
How Core Values Influence Diversity and Inclusion with Kim Crayton
How Core Values Influence Diversity and Inclusion with Kim Crayton

Episode 47

In this episode of Programming Leadership, Marcus and his guest, Kim Crayton, discuss how organizations are shaped by core values, and why values are integral for establishing true diversity and inclusion. Kim dives into some very uncomfortable truths in this episode, pointing out how most organizations are not actually ready for inclusion and diversity, because they are operating with misaligned values that make it impossible for stakeholders to thrive. Kim also explains how businesses can leverage diversity to effectively compete in the information economy, and explains why companies should rethink how they approach risk management.

Show Notes

- Why inclusion and diversity must be the bedrocks of an organization — and why they are essential for competing in the information economy. (2:16)
- The role that core values play in an organization, and how they are linked to processes, procedures, and policies. (1:43)
- Understanding shareholder value versus stakeholder value in an organization. (7:06)
- The core values of the #causeascene community: tech is not neutral; intention without strategy is chaos; lack of inclusion is a risk management issue; and prioritizing the most vulnerable. (9:48)
- How most companies lack the diversity to identify the potential for harm — and as a result, they don’t understand harm until it happens. (13:43)
- Thinking beyond finance when considering risk management. (16:38)
- How income sharing agreements (ISAs) often target and harm — instead of prioritize — people in marginalized communities. (18:50)
- Defining privilege, underrepresentation, marginalization, variety, and inclusion. (26:56)
- Redefining capitalism in a way that doesn’t cause harm to people by default. (34:51)

Links

Transcript

Announcer: Welcome to the Programming Leadership podcast, where we help great coders become skilled leaders, and build happy, high-performing software teams.
Marcus: All right, welcome to the show. This is another episode of the Programming Leadership podcast. I’m Marcus. Today I am very excited to have Kim Crayton with me. Kim is a business strategist and tech leader coach. And you know that’s a place in my heart I really love to work. So, I’m really excited to hear about what Kim is going to help me think through today. Kim, thank you so much for being on the show. Kim: Thank you for having me. Marcus: And Kim, as my listeners know — and I usually start every episode this way — we were in the middle of a great conversation when I said let’s actually make this the podcast. So, let’s rewind just a little bit. We were talking about taking the idea of core values — really sign-on-the-wall, aspirational kinds of ideas — and making them valuable, meaningful, ooh, maybe even the word measurable came up. Kim: Not even maybe. Definitely. You cannot manage what you cannot measure. Marcus: That sounds like Drucker. I’ve heard it over and over, and I completely believe it. Let’s back up a little bit. So, you mentioned that sometimes you and your clients start with core values — Kim: Not sometimes. All the time. Marcus: Why is that? Kim: So, we start with core values because, as I said before we started, most companies in the tech space are not businesses. What they are, are scaled — even if they’re profitable — scaled products or services. A business requires processes, procedures, and policies in place to help you get to a place where you can measure and manage. That’s what a business is. And so, I like to start with core values because — and this is also something we talked about before we got started — most of your companies, the majority of your companies, I’m going to say the vast majority of your companies, aren’t ready to be competitive in the information economy, which necessitates knowledge, and that requires inclusion and diversity.
Inclusion and diversity are not add-ons, nice-to-haves; they should be the bedrocks of your organization. We’re no longer in an industrial age where we’re making widgets: you give someone a manual, you put them on an assembly line, and they have to make widgets that are identical to their neighbors’ because those widgets go into a million different things. What we are now is an information economy — and information is meaningless: it’s input. What you want is the output, which is knowledge. And knowledge is tacit knowledge. It’s that — so you input information, and I do my job; what I come up with is knowledge, which is actually tacit knowledge, which helps me do my job efficiently — all those things about my lived experience that make the job that I’m doing unique. And if organizational leaders are able to get me to explain — to share that tacit knowledge from my lived experience — then they’re able to use that for innovation, differentiation, and competitive advantage. This is why we need inclusion and diversity: because it’s through my lived experience that I can take the same job as a white man and come up with a totally different set of, “Hey, have you thought about this? Hey, have you thought about that? Hmm, from my experience, how you’re thinking about that is going to harm people.” Blah, blah — that kind of thing. So, that’s what you need. And you need all of us at the table. And so, it can’t be these siloed things. This needs to be fundamental to how you do your business. This is why I am not an inclusion and diversity expert; I just find that the majority of the companies that I work with are not ready for D&I.
They’re not ready for inclusion and diversity, and so they keep doing these things, these little one-offs — and this is why one of the #causeascene guiding principles is, “Intention without strategy is chaos.” Because you have an intention; it’s not rooted in anything, and you do these things, these initiatives, and then they fundamentally cause harm to the most vulnerable in our community. So, going back to core values — I had to give you that background — I always start with core values because that helps an organization understand where we’re going. So, you can put these aspirational things on a wall, but for me, core values are the thing that every decision within an organization needs to be made from, and that needs to come from every person in the organization. Core values are the thing that scales; that helps you ensure that inclusion and diversity are fundamental to your organization and that you’re able to measure. So, the story I was telling is, I had a client early on, and one of her core values was beautiful things. So many core values — most core values — start with adjectives, which are really squishy. She used to call them squishy. They’re these heart things. These are the things that make you feel good inside, these aspirational things that you want to bring into the world. But how do you do beautiful things if your job is an oil rig, right? You know, it’s like, how does that make sense? So, it was based on the business she was trying to build, or the ideas she was trying to iterate on — oh, let me stop here. Also, the Lean Model Canvas does not make you a business. It is how you iterate a product or service. I have to say that. Marcus: I’ve heard you say that. Yeah. Kim: Because I’m so sick of people acting as if. We’re using it for something it was not designed for. The Lean Model Canvas was designed to help you iterate a product or service; go-to-market stuff.
It’s not about putting a business structure in place, so you do not have a business if you don’t have procedures, policies, and processes. So, with her business — like I said — one of her core values was beautiful things. She’s like, “How do I keep the essence of what that is?” So, by the time we got into the process she saw that, for her, everything she did had to be viewed through the lens of beautiful things, because she had three core values and that was one of them. So, how she and her team craft an email is more important to her in her business than in somebody else’s business. If that’s the first contact someone as a customer or client has with her, that email has to be beautifully written. It has to mean something to her. If she has an on-site location, she has to think about what kind of toilet paper she’s going to have in the bathroom, because she wants customers to have that experience of beautiful things even when they go into the bathroom. That’s different from somebody else who doesn’t have that. You come into my place, whatever I use, you use, you know? [laughs]. Marcus: [laughs]. So, in her case — and I’m curious, the first thing that came to mind, you mentioned oil rig, right? I’m guessing hers wasn’t an oil drilling company. Kim: But let’s pull that out. Let’s say it was an oil drilling company. Marcus: Mm-hm. Kim: So, I talk about shareholder value versus stakeholder value. Shareholder value is what’s legally in the law for most of these companies running toward IPOs: shareholders are the only thing that matters. It’s actually written in the law that if the company leaders do something or make decisions that shareholders feel are not going to help shareholder value, they can actually be sued. So, this is where I take issue with politicians making these grand statements about keeping jobs in America and what they’re going to do, da da da da.
The average layperson who doesn’t understand this believes it, because they’re speaking to your emotions. That is not how this actually works. The board of directors is in place, the CEO is in place, to serve shareholder value, period. And they’ve got to figure out how to increase shareholder value. Versus stakeholder value — and this is the order it goes in, and you need all four of these: first, you have to prioritize who works for you, because if the people who work for you are prioritized, they feel psychological safety — and again, in this information economy, in this knowledge economy, they are going to produce their best; they’re going to provide you with the tacit knowledge you need to innovate, differentiate, and compete. Then you have to think about who you partner with, because we see constantly how organizations that are trying to make decisions aligned with their core values end up partnering with organizations that do things that mess up their reputation, and all kinds of things. Then you have to think about your customer, or your client, because they may start using your product or service in ways you had not even thought about, which could cause harm. So, you have to think about them next. The last is who invests in you, because if all the rest — who works for you, who partners with you, who buys from you — is taken care of, investment happens. Shareholders have the investment they thought about. So, going back to the oil rig example: we haven’t seen it. And again, this is where — we talked offline about this — I want to redefine capitalism without white supremacy: the economics of being anti-racist. We haven’t seen it because there has been no incentive to do this.
So, if I was imagining an oil rig — if I was in the oil business and I wanted to create a business that spoke to my core values — well, the core values of the #causeascene community; I call them guiding principles, but they could be seen as core values. One is: tech is not neutral. Two is: intention without strategy is chaos. Three is: lack of inclusion is a risk management issue. And four is: prioritize the most vulnerable. They flow down. Once you understand that tech is not neutral — that’s a big one, because everybody keeps thinking tech is neutral, so that’s the biggest hump we’ve got to get over. Then we get to intention without strategy is chaos. I don’t care about your intention; it’s impact that we need to be thinking about. We need to be thinking about all the hypothetical ways, all the edge cases — because that’s another thing, we don’t want to talk about edge cases. Edge cases are where the harm comes in, so we need to think about who can be harmed. So, it’s not about intention, it’s about impact. Then we need to think about lack of inclusion as a risk management issue. And we’re seeing a lot of that now. At some point, we’re really going to start seeing people getting sued over the lack of inclusion. And then, once you understand that, you understand why we have to prioritize the most vulnerable. Okay? And so, I want to stop here and give two definitions. Diversity: I define diversity as variety. It’s that simple. I have a Crayola box. If I have a four-count, there’s not much diversity here. I am not an artist. I am not that good with a four-count Crayola. It’s going to be an ugly picture. A sixty-four-count Crayola box is where the variety is. I’m still going to have an ugly picture, but it’s going to be a colorful picture. I can make up colors, I can combine, I can create something with sixty-four that I couldn’t create before. So, that’s about recruitment.
So, this is how you need to think about it — when you think about diversity, that’s where your recruitment comes in. And what are you putting in place? What communities are you being uncomfortable in, so that you can build relationships, so that you have the diversity that you need? And then, I define inclusion as my lived experience — how comfortable, how safe I feel; psychological safety. And you as a person can’t tell me that I’m included. Only I can tell you that I’m included, and that’s about retention. That’s about once you get me in the door: how safe and welcome I feel determines how long I stay, and how willing I am to provide the tacit knowledge from my lived experience that you can use to scale, innovate, differentiate, and gain competitive advantage. So, it’s no longer you give me a manual and I follow. No, you need to be prioritizing me so I can help you. And this is where we’re missing out. If you’re not failing on the first one, which is recruitment, then people are quickly leaving because you’re expecting culture fit. Culture fit is the antithesis, is the opposite, is the killer of inclusion. It is no longer about assimilation, it is about accommodation. Every time you bring someone new into your organization, your organization should be expected to change. It needs to change. So, having said that: if I decide, hey, I see an opportunity in the oil industry, I’m going to think about, based on my core values — the #causeascene guiding principles — what is the strategy for ensuring that the most vulnerable are protected? Yes, I want to make a profit. Yes, all of these things. But I need to think about mitigating harm; that’s going to be a priority for me. That is not — as we’ve seen over years and years, decades and decades, of the oil industry — a priority for them, because it hits on the third one: lack of inclusion as a risk management issue.
For them, they have a risk management strategy that says — and they have lawyers — if we do this, how much is this going to cost us? Are we willing to take that hit? Marcus: Mm-hm. Kim: So, their risk management strategy is based on reacting. We screw something up, this is what it’s going to cost. This is why we have insurance, you know? This is what it’s going to cost, and we’re willing to do that so we can keep doing our own thing. For me, I’m going to say: what is the potential for harm? I’m going to put more money in on the front end to minimize that. I’m not saying I’m ever going to eliminate it. But I’m going to minimize it so that, in the long run, my legacy of harm is substantially lower in an industry that has historically been harmful. Now, if I’m looking at my stakeholders — particularly with how socially conscious people are now — think about what kind of investment I can get versus someone who’s causing harm. Marcus: Sure. Kim: Think about the kinds of people who are now re-envisioning how we can do things that have been traditionally harmful, and who want to be a part of that and financially support that. I’m not going to have a problem getting funded. Marcus: I want to go back to something and ask you about this idea of “minimize the potential for harm,” because you mentioned that was going to be one of the first things you thought about as you designed your business strategy. Kim: Specifically in the oil — when we was talking about oil. Marcus: Yes. Kim: Mm-hm. Marcus: As we were talking about. And yeah, I’m really curious — and you can use the oil industry or any more practical example you want — but what are some, sort of, practical ways that companies you’ve seen, or people you’ve helped, actually minimize the potential for harm? What are some things that get put in place that have that impact? Kim: Well, I’m going to be honest: many of them don’t even think about it, because it’s not in their perspective.
They do not have the diversity at the table to even help them identify the potential for harm. Most companies don’t understand harm until it happens. And unfortunately, it requires — and I say this all the time, just in the #causeascene community: it’s a shame that I have to be harmed in order for the white folx in the community to recognize that there is racism. I have to be physically in a video, crying, explaining a situation that happened to me, for folx to say, “Wow, we’re not having the same lived experience,” because you question nothing about your own experience, but you question mine because you’ve never had it, so you don’t believe it. And this is the problem. You don’t have the canaries — the ones you keep throwing down the mineshaft — at the table. You just keep throwing them down because they’re expendable to you. Again, it’s that calculated risk management that I have insurance for. Marcus: Yeah, that’s what I was thinking. And you used a really interesting term. You said risk management in terms of, well, how much will it cost us? And you actually put it in the context of getting sued — Kim: Yes. Marcus: — and I’ve been in companies that absolutely think that way. I know that that is the, sort of, default stance of, “Well, what will it cost us if someone brings a lawsuit against us?” And I have other questions about this, that I’ll hold, around: is that the only real risk, or is that just the risk that we, sort of, see on the books because it’s the one we’re most used to thinking of? Kim: That’s the risk that people with limited perspective see. Marcus: Yeah. And now you’re bringing up the risk of competition. Are we competing as well? You brought up the risk of ideas, right? Kim: What’s so interesting is — even in that, that brings up an issue with MBAs for me. I can go into several companies — and you see it all the time — they are not businesses, and this is why I call them companies or organizations.
They have nothing in place, absolutely nothing in place, but they have an NDA. Marcus: Right. Kim: Because they know they’re causing harm, so that’s a risk management insurance that they’ve learned they need to put in place, but that doesn’t change behavior. They’re mitigating the sharing of the harm that they cause by silencing those who are most vulnerable. Marcus: I mean, my perspective on that is, like, I’ve talked to so many people who say, “I’d love to tell you my idea; you just have to sign this piece of paper.” And it goes back to: you don’t have anything. Kim: An idea is not unique. Implementation is how you differentiate. And see, again, that’s the industrial thinking. We’re in a knowledge economy. Marcus: So, ideas aren’t where value is. Kim: No, I didn’t mean that. But — ah, you hit it. You hit it. This is how white men keep getting funded with VC money. They — look at WeWork. Look at Uber. Look at Lyft. Look at all these freaking scooter companies. Look at — Marcus: [laughs]. Kim: They have never been profitable. They had an idea — they had the privilege to have an idea and to get funding to help them iterate to create something. Many people — like, I don’t get that opportunity. I have to come in the door with a product. I don’t get funding on an idea. If I get funding, it is on an already scalable, profitable product or service. That right there tells you that you don’t even see my perspective. So, no, an idea is of the privileged. And then, let’s talk about the societal level, period. Because all of this stuff is connected. We’re seeing this right now. We have been talking, on my podcast and in the community, about how bad these ISAs — which are income sharing agreements — are within the coding boot camp industry. Marcus: Can you define that a tiny bit for us? Kim: Yes. An ISA is what they’re positioning as an alternative funding solution for college. So, they’re pitching this as a way so you don’t have to take on college debt.
So, what it is, very basically, is you don’t pay upfront. You pay based on a percentage of your income down the line. Sounds great on paper. Yeah, yeah, exactly. Marcus: Yeah. They can’t see this, this isn’t video, but I’m grimacing. [laughs]. Kim: Yes. And they refuse to call them loans. Okay? So, they’re unregulated, and so an ISA is this thing. So, now you have all these people, these white men with these ideas, and the business model is actually the ISA. They’re trying to test which industries it works in. So, they’ve hit on boot camps because they realize that there are people transitioning who don’t have the money for boot camps, da da da da, so it sounds good on paper, and it says you’ll only be charged if you get a job making this amount of money, and it’ll be capped after some arbitrary number of years — after three years or whatever. Folx think that’s my only issue — first of all, I’m not going into something where I don’t know what I’m going to end up having to pay at the end. I’ve got a problem with that. There are also a lot of other caveats. And so, what happens is, they’re also targeting the most vulnerable. They’re targeting people in marginalized communities. And when I say marginalized, I mean people — this is not individual, I don’t talk about individuals — I talk about people whom society’s systems have negatively impacted. So, they target these individuals, give them this hope — and okay, let me — this is a whole nother thing — I come from education. So, when I first saw the boot camp model, I started asking questions then, because there’s no one-size-fits-all for education. Many of the ones I’ve seen — the majority of the ones I’ve seen — do not have qualified instructors, do not have qualified curriculum developers, do not have qualified support systems afterwards, do not understand adult learning theory. None of this is about education for me.
So, they’re slapping this education model around these ISAs, where the business model is the ISA, because what they’re doing is, once you sign a contract, they’re bundling these things up and selling them as investment instruments. So, they’re selling them off. Marcus: Oh. Kim: Yeah — oh, yeah. It’s a whole, whole nother thing. So, we’ve been having this conversation for a very long time, and people think it’s just the ISAs. To me, like I say, the ISA is just a shitty cherry on a shitty cake, because the curriculum sucks. There are so many fail points in this. This is not a better alternative to college. With college, if I finish, I walk away with a degree. Yes, I have student loans, but I can negotiate — I might not like these people, but I can negotiate. I can get forbearance, I can get income-based repayment; there’s a whole bunch of options I have. Not when I’m dealing with investors who hold this loan. There have been stories of people — this stuff is so bad — who drop out, and because they’ve given these companies permission to look at their tax returns, let’s say you drop out and you get a job in a whole nother industry that meets that threshold: they’re coming after you for that money. Marcus: Even though the boot camp didn’t teach you what you’re now using in your work? Kim: You’re not even in the industry. You just left. Marcus: So, you’re the product at that point? Kim: Yes, you’ve been the product. Marcus: Yeah. Kim: And no one wants to talk about that. So, that’s not prioritizing the most vulnerable. So, I’m not ever going to create something where I see the potential for harm. Going back to that: that is a potential for harm. And they know it’s a potential for harm because, as I’ve said before — there are people who succeed in boot camps, but the overwhelming majority of people who succeed in boot camps already have some kind of technology background; they’re in engineering; they’ve been doing it on the side.
You know, very seldom is it somebody who does not know what a variable is when they walk into a boot camp. So, because there aren’t enough of the people who you know are going to be successful — and that’s not enough for VC money — now you have to bring in the spin doctors for sales, and the marketing department, and pitch this to everybody. Oh, everybody, this works for everybody. That right there is understanding: you already know the potential for harm, and yet you’re still moving forward. Marcus: Hmm. Oh, that sparked in my brain: I wonder, if we analyzed companies and looked at how much, proportionally, they were spending on sales and marketing to bring people in, is that an indicator that you know you have to polish the turd that you have? Kim: Yep. Think about — I just saw something recently with 23andMe. So, when these things first came out — all of them: Ancestry, all of those things — they were pitched as get-to-know-your-lineage, da da da da da da da, right? Those lineage ones have been debunked: there’s no way you can know exactly where somebody is from. So, that’s that one. So, that’s how they started, because my mom did one for National Geographic trying to figure out where we’re from, because as Black people, we don’t know, you know? Black folx in the United States have no clue. If the slave owner kept good records, maybe. But we don’t know. But then you get into the DNA ones. And I was first cautioned — the first red flag went up — when I saw that Spotify was using your regional DNA results to create playlists. Yeah, exactly. Marcus: Hmm. Kim: So, if mine is Ethiopian, West African, Native American, or whatever, they were going to create a playlist based on the regions in my results.
So, people are just handing over — when you sign these things with these organizations, they don’t say, “We’re only going to use it for this, and we’re going to destroy it” — like, we’ll just run this test for you and then destroy it. No: they hold on to this information. So, what just came up with 23andMe is, they’re selling it to medical companies. Your DNA is the most personal thing I think we ever own, and now it’s being commoditized for profit by organizations you never intended. So, this is what I mean when you say polish a turd: when there’s something new like AI, machine learning, drones, they always bring it out as this fun thing first. The first thing we hear is the fun thing. So, everybody wants this thing for Christmas, and we pass them out, because this is going to be fun. It gets quiet when they start using it for medical, military — all of these for-profit things that can potentially harm us — and they’re using our personal data for it. Yeah, exactly. But again, that’s industrial thinking. Think about pharmaceuticals. When they get a new product, they bring out the sales and marketing team to get the product going. And then, they whisper the side effects. Marcus: Spoken very quickly, in fine print. Kim: Yes. And death is — psh. [laughs]. Marcus: Yeah, death. [laughs]. And we all know that some of those side effects sound far worse than whatever it is you’re trying to cure. Kim: And let’s talk about silos. Again, this is all about systems. That commercial doesn’t talk about how that medication is going to interact with other medications you may be taking, or how it’s going to interact with your lifestyle. It doesn’t talk about those things because we don’t have the research for that. It presents this one thing as a magic bullet, and we don’t think, again, about the potential for harm. I can pop this up in every place, but that’s because I’m from a marginalized community. And these are the things we think about on a daily basis.
Marcus: So, let’s rewind. You gave two really beautiful definitions for us, and I’ve written them down here. Diversity is variety. And inclusion is lived experience. Did I get those right, roughly? Kim: Well then, let me break down how I do the whole thing. So, I start with privilege. Privilege is only about access. It’s about who has access and who gets to wield access. People with privilege have access, and they can decide: eh, I want to use it today; eh, I don’t want to use it today. That’s what privilege is. And then, I go down to underrepresented. Underrepresented is only about numbers. I have five oranges; I have 20 pineapples; the oranges are underrepresented. That’s what that is. I’m saying this because people get all upset about terms. Diversity, like I said, is just about variety. Marginalization is about the treatment of people, groups of people — the poor, oppressive treatment of people. And inclusion is about my lived experience. Marcus: So, I think we kind of started with the idea that many companies that are thinking about these things aren’t ready to really — Kim: Oh, no, there — mm-mm. Marcus: They’re, they’re far from it. So, if we rewind back to values, we’ve come down a very interesting path. But how can somebody who’s listening — who’s saying, okay, we’ve got some values on the wall, but I don’t really think they matter a whole lot, because some executive put them there and they’re on a nice poster — Kim: And that’s when I used to start hurting clients’ feelings, when I break them down, because if they can’t be measured, they mean absolutely nothing. Marcus: Can you talk us through that process of how somebody starts to change their thinking? How do we move from aspirational — like that beautiful things example — to something that is measurable and meaningful? Kim: So, again, I gave you an example. It may seem trivial to you, but what toilet paper you use is measurable. Marcus: Hmm. That’s true. Kim: That’s measurable.
The email she receives, and the responses that people send — that’s measurable. We’ve got to stop looking for simple solutions to complex problems. Each situation is going to be absolutely different. So, say your core value is delight, which is one of my customers’ core values. It’s balanced against the other core values that they have. They’re looking at: does this delight? Again, they’re looking at the stakeholders. Let’s go through the stakeholders. Who works for you? Does implementing this thing on a website bring delight to the people in our organization as we implement it? Are they happy to see it? Is this something that they’re proud of, that they can say, “Hey, we did that”? Then you look at the partners, the second level of stakeholders. Are they saying, “Hey, this thing that you implemented, we can align with that. That aligns with our core values. Oh my God, that fits with this product we have over here that we’ve been trying to figure out how to do something with; that totally aligns. So, now we can bring that thing in to support what you’re doing.” Then the next level is customers and clients. Are they having a delightful time using this thing? Are they delighted interacting with it? And then, you have investors. Has this delight trickled down to touch their pockets? And not just money-wise — are they excited? Does it bring them delight to say, “Hey, I am an investor in this company. Oh my God, I just received this delightful email — this beautiful email of beautiful things — from the company I invested in. Let me show you that.” Or they’re just around in conversations at church, at the synagogue, at the coffee shop, and they just want to tell somebody about this organization that they support.
That’s how delight can be — and there are millions of different ways that delight can show up within that hierarchy of stakeholders. Marcus: And in each of those, we could put measurements in place. Kim: Yes, exactly. So, the first one is with the employees. How many errors, implementing this? Are we learning from the errors? Because that was the whole thing about move fast; break things. I have no problem move fast; break things, but there has to be a point where you have to move fast; break things, and let’s learn from what we broke, so we don’t do that same thing. So, we’re improving each time. If you’re doing a sprint, is the process of doing the sprint — oh my God, you just really breaking it down for me. Okay so, if you know you do stand-ups. Do I feel psychological safety in a stand-up, in this open environment, that, “Hey, I see a problem on the horizon?” Do I feel safe enough to say that, or I’m going to bite my tongue because every time I open my mouth, somebody says something smart. I get silenced. That’s delight. That’s one place. So, with partners, do they feel delight and safe in saying, “Hey, I see what you’re trying to build here,” so that going back to that standup, that can be measured. So, now we’re talking about the partners. Okay, I decided that, “Oh, I see a partnership here.” Do they feel comfortable in saying, “Hey, can we partner on this thing? I know we have this partnership over here, but oh my God, I see the value in doing this over here.” Do they feel okay in saying that? Or they say, we’ve been playing around with this thing a little bit and we see this hole. Or we’ve partner with you, and we’re getting a lot of pushback from our customers based on this thing. That’s measured. So, now, customers. Do customers have an easy way to give you feedback? And are you listening there? Are you getting back to them based on 100 blah, blah, blah, this is what the da da da. And are you being transparent about that? That’s measurable. And investors? 
So, not just the pocket, but you can see, let’s say you have shares out there. Hmm, let’s give them a code. How many of your current shareholders are bringing in new shareholders? What stories are they telling? Are they writing blog posts? Are they talking about you in the press? That’s measurable. Marcus: My head is spinning. There’s a lot here. There’s so many directions we could go and it made me think of a — maybe our slogan should be, “Move fast and learn things.” But I suppose — Kim: Oh! Marcus: — that — I don’t know. Maybe that’s just me. I like to learn things. Kim: Well, okay, let me talk about that because I’m finishing up my doctorate program. And my theory that I’m couching my research in is called learning organizations. Marcus: Ooh, now, I’m excited about that. Kim: Yeah. It’s a theory from Peter Senge. Marcus: Yes. Fifth Discipline. Kim: Yes. And so, it’s about, not organizational learning people. It’s about learning organizations, and a learning organization is all about learning. And so, when you’re in a space where the culture supports learning: good, better, and different, people feel safe. So, if the priority is learning, then you structure a business around supporting that, which means the inclusion/diversity piece has to be there. People have to have the psychological safety to feel that whatever they’re learning, they can share, whether they made a mistake, whether anything. Marcus: Yeah, absolutely. And we know that psychological safety is a very popular topic these days. Not only is it popular and important, but I think most companies are talking about it; I don’t know that a lot of companies are doing anything about it. Kim: Most aren’t doing it. Again, it’s like inclusion and diversity. Like you said, it’s the thing to talk about right now. Like, we’re talking about empathy, compassion. It’s these buzzwords, but when I actually sit people down and ask them questions, they can’t answer them. 
Marcus: I remember reading — after I read Fifth Discipline, I did some more reading — and if you haven’t read the book — I know you have, but if listeners haven’t read it, I highly recommend it as kind of an introduction to learning organizations — but I remember them saying — and it’s been a while — but they said, we really haven’t seen companies do this. And we’ve been looking. Kim: And this book was in 1990. Marcus: Yeah, it’s old. Kim: And so, yeah, and this is why, again — so let’s talk about the conversation we were having offline, this is why I want to redefine capitalism. We have not seen capitalism in a way that doesn’t cause harm. People, by default, ascribe these harmful oppressive things to what capitalism is. Capitalism is just a theory. It’s how we’ve implemented capitalism that is oppressive. And that goes for socialism, fascism, Marxism, communism. They’re all rooted in white supremacy. I want to — based on what I’ve learned, we haven’t seen it. None of what we’re trying to create now was supposed to happen. So, we’re all making this up. So, this is the thing I want — we’re all going to make mistakes. We’re trying to figure this out. I am a descendant of slaves. I should not exist. Based on the system that was put in place, I should either not exist, or I should still be in slavery. Things change. Marcus: In unexpected ways, every day. Kim: Exactly. And I get it. And what led to emancipation? It wasn’t that Lincoln had some strong belief against — no, he was going to do whatever it took to keep the union in place. He don’t have a strong feeling about the morality. He wasn’t an abolitionist. For him, it was keeping the union in place. He said, “If I could keep the union in place and still keep slavery, I would. If I could keep the union in place and get rid of slavery, I would.” Marcus: Because that was the number one goal. Kim: Yes. Marcus: Unification. I’m going to ask you a hard question, and I think — well, not hard for you; maybe hard for me. 
Kim: I was about to say please don’t project that on me. What is it? Marcus: No, this is my hard question for me. You’ve used this term a few times, it is not a term I’m super comfortable with, so I’m just going to own my discomfort. White supremacy sounds like the other person, not me. Kim: Nope. And this is why — okay, so now, I hope your guests are ready, your audience is ready for this. Marcus: Here we go. Kim: Here is my default. All whiteness is racist by design, and cannot be trusted by default without consistent demonstrated anti-racist behavior. And I say that because you are raised in a system — and this is a global system — that does not question or examine whiteness. You have been taught that you’re a individual, I am a group. You have been taught that your individual efforts make you who you are. You have not examined that there are systems in place that make you not mediocre. But you also do not translate that to me, as I am a representation of a group of Black people. And if one of us makes a mistake, it’s the whole group of Black people that makes a mistake. I’m not an individual. I am based on this system. I’m a Black woman, which means in this system, I’m an animal; I’m not even human. This is why people can say what they want to me, or they think they can. People can approach me and say — it’s so — it is by design. Also, I say in a system of white supremacy — so there’s levels. All whiteness is racist by design. You had no choice in it. Then there’s underneath there, it’s called the model minority myth, and that is, the minorities that are allowed to come in this country only if they behave in such ways and are in service to white supremacy. Which means there’s a whole lot of anti-Blackness there. You can do anything as long as you’re not one of them. And then, Black folx don’t escape it either. 
We have a whole lot of internalized white supremacy and anti-Blackness within ourselves because we are also a part of a system that told us we were nothing. So, that’s why you see colorism in our community. That’s why you see assimilation. That’s why you see President Obama talking about, “pull your pants up,” and all this other stuff. They’re talking about a group of people, and we never talk about the systems that are in place to create these things. If we were all equal — and this is why I could care less about equality; it’s about equity. If we were all equal, and the research has proven, all women in childbirth or childbearing age — pregnancy would have the same outcomes. We’re not. Black women, no matter the education level, or the amount of money they make are still disproportionately dying in childbirth, and their babies are dying in childbirth compared to white babies, and that is even compared to what we consider poor white people. So, yeah, it’s very uncomfortable, it’s very uncomfortable for a lot of white people because, for many of you, 2016 was the eye-opener for you. You thought we were in post-racial, just because you gave your vote to a Black person. Oh my God, we’re post-racial. No, he was the perfect Black man for you. He was like the Michael Jordan. He was like the Beyonce. They’re different from everybody else. So, yeah, it’s very uncomfortable for you, but think about what I have to do every single day. So, I could care less about your discomfort. This is another reason why I no longer recommend White Fragility by Robin DiAngelo because white people have used this to learn the vocabulary of wokeness without taking responsibility for the harm that they cause. Everything has cause and effect. So, when I say something — and I’m going to use this as an example. So, white fragility is an academic term. And that’s another thing. She used it as an academic term, but in the wilds, people are using it totally different. 
So, she used white fragility as a way to explain white people’s reaction to when race conversations come up. They get defensive and that’s all it was. And I’m going to be cautious about — because I don’t know if her research was to talk about the cause and effect of white fragility; it was just to explain. People have taken that as the Bible, and have run with it, and they don’t talk about when white fragility is engaged, there’s an effect of that. You get defensive; you do something. It does not just sit there. If you get defensive, you attack, and I get harmed. So, that’s what people aren’t talking about. So, again, it’s intention versus impact. And all I care about is impact. I could care less about your intention. So, right now how people are using white fragility, as you’ll see it on Twitter, somebody say something, and like, “Oh, they’re white fragility.” No, no, no, no, no, no, no. We’re going to stop using that because it absolves a white person from taking responsibility for what they did or what they see. Marcus: Has it become an excuse? Kim: Yes. Yes, yes, yes. “Oh, it’s my white fragility. So, I don’t have to apologize. I don’t have to make amends. I don’t have to do anything I’ve caused active harm, but I didn’t mean to because it was my white fragility.” Marcus: So, when we do harm — you just named a couple of things that I think I learned in kindergarten is pretty good steps when we’ve hurt someone: taking responsibility, apologizing. Is there — and like — I’m trying to form words right now because my brain is going in so many different directions. But I opened up this box here. So — Kim: Yep. Marcus: — I am here and I did it knowingly. And actually, it was one of the things I was looking forward to, Kim, to talk about. Because it is an area that a good friend of mine and I are challenging each other to — and you might know him, but I won’t say his name on here. 
But when we do find we’ve harmed — if we stop saying, “Well, it’s my white fragility,” or, “Here is my excuse,” what is even a reasonable next step? Kim: So, it’s about owning that, first of all — okay, so let me back up. I have, and people get mad at me, but I have five white friends. I have five people, maybe six — I like to keep it on one hand — white people that I seriously, seriously, call friends. Literally. And these are people who have consistently shown me that I can trust them when they see me getting beat up, and they can tell by the tone of my tweet, they’ll DM, “Hey, let’s have a conversation. I know you need to unpack this — ” because I’m an external processor. And they just take whatever. They’re what I call power allies. There are people who are willing to make themselves uncomfortable so that I can be comfortable. Not many of you are willing to do that. But even then, they know that because of whiteness — and this is why I put whiteness and Blackness on the same level — because if I can’t have a conversation with you as an individual, that, again, is superiority right there, you treating me as a group. So, even the languaging is about white supremacy. So, note, we’re not going to talk about white people — a white person as an individual. I’m going to talk — if you talk about Black people as a group, we going to talk about white people as a group. Right? So, I talk about whiteness versus Blackness. Even in your whiteness because you are designed based on the systems — oh, particularly school. We tweeted about this recently, how different — and there’s an article about this — how different California textbooks are compared to Texas textbooks. And reinforcing white supremacy. And Texas textbooks are used around the country. They are — yes, schools. Districts around the country are using — have traditionally used Texas books as the staple. So, problem right there. So, they know as friends, that they can actively cause me harm. Know it off the bat. 
So, they know that they’re racist and that they can cause me harm. So, what they do is — and this happens — I am comfortable enough — and it’s again, I say it’s unfortunate that I have to be in pain for them to see the harm that they’ve caused. So, they see the look on my face. They hear the pain in my voice. They don’t want that. Most people don’t want to be complicit in the harming of people that they care about, or even the people they don’t know. They don’t want to be complicit. And so, what they do is actively understand that once that happens, it’s not picking up where you left. And this is where the problem — they’re like, “Oh, I said, I apologize. Let’s go on.” Mm-mm. No, you starting back from square one building that trust over again. And if you’re not willing to start back at square one, then you’re absolutely of no service. So, let me give you an example. I also have privilege that others don’t have. So, there was a conversation that I was trying to have, and people kept coming to me, and I didn’t know how to have it without causing harm, but I went in it conscious that I was going to cause harm and it was about trans women. White trans women, and the harm they’re causing to brown and Black trans women as well as brown and Black women in lesbian, non-binary communities because when whiteness comes into the room, it centers itself, which causes harm. And so, in having the conversation, I retweeted something because it was in the thread. And I was only talking about one piece of the tweet. But the thread was harmful. The languaging was harmful for trans individuals, trans women. When they brought that to my attention, I deleted it — and I was understanding, I deleted the tweet — because it was just a top part, and I didn’t even know you could delete parts of a thread without deleting the whole thing, and I immediately went to one of the women who was helping me understand that — also, let me do a caveat. 
I don’t have to understand, to know that there’s potential to harm. All I need is for you to tell me that there’s potential to harm and I’m going to stop. I’m going to do whatever I can to minimize it. That’s another thing of whiteness and that’s why I don’t get into debates with people. I’m not going to debate my lived experience. Not going to do that. Just because you don’t have the lived experience, you’re not going to put me in a situation where I’m debating my existence with you. Not going to happen. So, as soon as I became aware that there was a problem — I recognized I was causing harm — I was back at square one. Immediately, I went to one of the women who was explaining the situation to me, “Hey, can we do a podcast episode?” That happened on a Thursday or Friday. That podcast episode came out the following Wednesday, explaining it all. So, it goes back to your move fast; learn something. So, I was like, I made this mistake. I did it publicly. Let’s talk about this publicly. Marcus: That really puts your money where your mouth is, right? Kim: Yes, exactly. And it also helps us understand that we all have privileges, and we all have a responsibility, and this is what pisses me off with these white dudes and white women in tech who have these huge followings on social media based on some technology they know, whatever it is, and then they want to wade into what people call the [unintelligible], so justice issues and they go and fuck up everything. And then, when they get called out, they’re like, [wails]. No. No, you’re not an expert in this. Stay in your damn lane. Then they go and delete their Twitter for a few days because they know when they come back and do this apology after they deleted the tweet, it’s going to be totally out of context. People going to be like, oh, you learned, but they’re not going to know what the hell happened before. And so, they get pats on the back, and then they go on about their lives. That is harmful. 
That’s toxic, and that’s causing harm, and I’m no longer, no longer standing for it. Marcus: It sounds a little bit like PR. Like, you leave and you say this kind of a, yeah, you go through the motions in a certain way, and you expect it to have a certain outcome. Kim: Well, it is because we’ve heard from people who are pissed off because now they’re losing income from it because we’re calling them out. So, yeah, it’s part PR, but it also is that whiteness thing. I’m going to protect whiteness and my reputation at all costs. I don’t care about who I throw under the bus for this. And that is why I say no one escapes white supremacy and whiteness unharmed, and that includes white people. Marcus: Mmm. I think that is such a wonderful way to end the show. To be honest, like that one statement is a nugget. I know it’s going to be ringing in my head all day. Kim, where can people find you, and engage your work online; engage your services? Kim: Okay, so let me make this caveat. I’m found on Twitter at @KimCrayton1, that’s K-I-M-C-R-A-Y-T-O-N-1, and I’m spelling it out because I vet every follower I get. I cannot afford to have people come in the space who are going to cause harm to my community. So, I’m going to look at your timeline. I’m going to look at your followers. I’m going to look at who you’re following. And as an educator, I didn’t do this early on — and I wish I had. But I created an echo chamber for myself and for my community because we deserve to be safe. I owe you absolutely nothing. So, your voice does not need to be heard. So, if you want to come in, learn, sit down, and listen, watch. But you don’t get to question anybody’s lived experience. If you don’t understand, do your homework. I am on a doctorate level, I have no time to be teaching pre-k anymore. That’s why we don’t get anywhere because we continue to have to teach white people the basics of racism, and that slows us down, and it’s a distraction. So, that’s the one part. That’s the advocacy work I do. 
If you are a business leader, and you’re looking for coaching, you can go to hashtagcauseascene.com/coaching, and I work in a minimum of six-month contracts because I’m no longer doing workshops, and whatever because you cannot change this in a workshop, and I will not have my name used to say, “Well, we worked with Kim.” Nope, it’s not going to happen. If you’re not willing to do the real work, do not waste my time. I have a huge vetting process that many people get very offended by which shows me their fragility. They just hit it right at the beginning. It’s very obvious at the beginning, that you’re not ready for me, unless you’re ready to be honest, and create a product or service, create a business that allows all your stakeholders to thrive. If that’s not what you want, then don’t bother me. Marcus: Kim, thank you so much for being on the show. Kim: Thank you and have a wonderful day. Announcer: Thank you for listening to Programming Leadership. You can keep up with the latest on the podcast at www.programmingleadership.com, and on iTunes, Spotify, Google Play, or wherever fine podcasts are distributed. Thanks again for listening, and we’ll see you next time. The post How Core Values Influence Diversity and Inclusion with Kim Crayton appeared first on Marcus Blankenship.
https://medium.com/programming-leadership/how-core-values-influence-diversity-and-inclusion-with-kim-crayton-4ec433ea6168
['Marcus Blankenship']
2020-06-25 15:23:49.198000+00:00
['Management', 'Software Development', 'Startup', 'Technology', 'Leadership']
Remembering the Comic Genius of Fred Willard
Last Friday, we lost one of the funniest and hardest working men in show business when Fred Willard passed away at the age of 86 from natural causes at his home. Fred Willard is among the most recognizable faces in Hollywood, even though he never quite became a household name. Much of his critical acclaim and fame was due to his roles in a string of iconic mockumentary feature films. These include Rob Reiner’s 1984 classic This is Spinal Tap (about a fictional heavy metal band) and a quintet of Christopher Guest films (1996’s Waiting for Guffman, about community theater; 2000’s Best in Show, about the world of show dogs; 2003’s A Mighty Wind, about the reunion of long estranged folk music bands; 2006’s For Your Consideration, about Hollywood awards season; and 2016’s Mascots, about, well, sports mascots). His roles in these films varied widely, but he always delivered a fully committed and gut-busting performance that gelled perfectly with the brilliant ensembles. His film career was not confined to the mockumentary genre, but it was almost exclusively in comedy. Other memorable films he appeared in over the decades include 1977’s Fun with Dick and Jane (with Jane Fonda and George Segal), 1987’s Roxanne (with Steve Martin and Daryl Hannah), 1999’s Austin Powers: The Spy Who Shagged Me (the second film in the Mike Myers-led James Bond spoof trilogy), 2001’s The Wedding Planner (with Jennifer Lopez and Matthew McConaughey), and 2004’s Anchorman: The Legend of Ron Burgundy (with Will Ferrell and Paul Rudd). Perhaps his most unique film credit came from his role as a CEO in 2008’s Oscar-winning Pixar film WALL-E, which marks the only live-action speaking role in the animation studio’s 22-film catalogue. Willard’s work on the small screen was equally, if not more, memorable than his film work. 
After appearing in small guest roles on smash hit comedies of the ’60s and ’70s like Get Smart, The Bob Newhart Show, Laverne & Shirley, and Love, American Style, he gained cult fame and critical acclaim for his role on Fernwood 2 Night, a satirical comedy about a talk show that was developed by television legend Norman Lear and aired every weeknight in syndication for several months in 1977. By this point, he was well known enough to headline an episode of Saturday Night Live (which aired in 1978 with musical guest Devo). He continued to be a regular presence on television for the next 40 years. He appeared in episodes of an astonishing array of hit comedies (and the occasional drama), including Friends, The Golden Girls, Murphy Brown, Mad About You, Ally McBeal, Married…with Children, Mama’s Family, The Love Boat, Family Matters, That 70s Show, The Drew Carey Show, The Closer, Castle, and Hot in Cleveland. In recent years, he also had arcs on the soap opera The Bold and the Beautiful and made a series of appearances on Jimmy Kimmel Live. Willard was best known to me for a trio of classic series on which he appeared in extended arcs. He appeared on eight episodes of Roseanne as Scott, the husband of Roseanne’s boss Leon (Martin Mull). Leon and Scott were most certainly two of the highest profile LGBT characters on television at that time due to the enormous popularity of the show and were among the first depictions of gay marriage ever on network television. He appeared in thirteen episodes of Everybody Loves Raymond as the conservative and uptight father-in-law of Robert (Brad Garrett). He fit in perfectly to the esteemed ensemble and showed remarkable chemistry with his on-screen wife Georgia Engel (who we also lost last year). Most recently, he played the goofy and fun-loving father of Phil Dunphy (Ty Burrell) on 13 episodes of Modern Family. 
The 11th episode of Modern Family’s final season showcased Willard in a heartbreaking episode that ended with his character’s death. This episode aired just a few months ago, underscoring how hard Willard worked right up until the end. Clockwise from top: Fred Willard on “Modern Family” (Copyright: ABC), “Everybody Loves Raymond” (Copyright: CBS), and “Roseanne” (Copyright: ABC) Fred Willard was a quintessential character actor and a comic genius. His droll delivery and seemingly effortless ability to wring laughs from virtually any role in any project was legendary in the industry and he was instantly recognizable to millions of viewers, even though he rarely got top billing, media buzz, or awards attention. Despite over four decades of brilliant performances, he only scored four Primetime Emmy nominations (three for Everybody Loves Raymond and one for Modern Family, all in the category of Outstanding Guest Actor in a Comedy Series) and no wins. He did, however, score a Daytime Emmy win in 2015 for his guest appearance on The Bold and the Beautiful. Thankfully, we have at least one more chance to revel in Fred Willard’s comic genius. On May 29, Netflix releases one of his final projects — the comedy Space Force. The show is a satire of the current administration’s new sixth branch of the military focused on outer space, and features a brilliant ensemble that includes Steve Carell, John Malkovich, Lisa Kudrow, and Jane Lynch. And, thankfully, we will always have his decades of brilliant film and television work to make us laugh on even the darkest of days. Rest in peace, Fred. I hope you know how much you were loved.
https://medium.com/rants-and-raves/remembering-the-comic-genius-of-fred-willard-344e0bb78558
['Richard Lebeau']
2020-05-21 16:34:34.984000+00:00
['Movies', 'Comedy', 'Television', 'Society', 'Culture']
Data Talks
I remember around 3 or 4 years ago, there was a saying that I used to hear a lot: “Money Talks”. Now I can see that this no longer holds; the better way to phrase it would be: “Data Talks”. We are now in the Cognitive Era; all the buzzwords in the media are around data: Cognitive, AI, Machine Learning, Big Data, Data Analytics, etc. The big tech players are now shifting their focus as well. Amazon, Google, IBM, Microsoft, and now even Alibaba and Oracle are joining the race. Last week, Gitex tech week concluded in Dubai, and some of the major highlights were around data, from visualizing data to predicting city-wide events, to bringing the cloud to the masses; everything was around data. Smart Dubai Office constructed a giant visualization of all the data flowing in Dubai (called Future Live) with the aim of using this data to prepare for Expo 2020. There was an entire hall in Gitex dedicated just to the cloud, featuring the typical cloud giants plus many newer entries. IBM showed a citywide solution called MetroPulse, which helps in understanding city dynamics and how different events and data in the city correlate and affect each other.
https://medium.com/astrolabs/data-talks-10f348bd5c11
['Aoun Lutfi']
2017-10-16 12:24:41.617000+00:00
['Data Science', 'Data', 'Insights', 'Artificial Intelligence']
How to Choose the Right Number of Clusters in the K-Means Algorithm?
How to Choose the Right Number of Clusters in the K-Means Algorithm? What is Within-Cluster-Sum-of-Squares (WCSS) in clustering? What is the elbow method used in the K-Means algorithm?

Before we dive deep into choosing the right number of clusters, we first need to know what the K-Means algorithm is. Then the important question becomes: how do we choose the right number of clusters? To answer it, let’s start from an example scatter plot of unlabeled points.

Scatter Plot

Our clustered result groups these points into 3 clusters. But how were we able to decide on 3 clusters for the categorization? Why not 2 or 4? To answer this, let’s understand the concept step by step:

Step 1. First, we understand what Within-Cluster-Sum-of-Squares (WCSS) is. WCSS is the objective function that K-Means minimizes: the sum of squared distances of the observations from their cluster centroids. Its value helps us judge how many centroids, or clusters, to include for the dataset.

Step 2. Now let’s place 1 centroid in our dataset. The value of WCSS is very high, because the sum of squared distances of every observation from a single centroid is very large.

Step 3. Now include one more centroid, i.e., 2 centroids in the dataset. The WCSS result is much lower than the Step 2 result.

Step 4. Now again include one more centroid, i.e., 3 centroids in the dataset. This gives a much lower WCSS than Step 3.

Step 5. Now, the question is: when do we stop adding centroids to the dataset? To answer this, let’s plot WCSS against the number of centroids. The graph shows a substantial drop in WCSS when going from 1 centroid to 2, and another abrupt drop from 2 to 3. From 3 centroids up to 10, there is no abrupt change, only a small decrease each time a new centroid is added. 
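The stepwise WCSS reasoning above can be sketched in a few lines of scikit-learn, which exposes the WCSS of a fitted model as `inertia_`. This is a minimal sketch, not the article's original code: the synthetic blob data stands in for the scatter plot, and the parameter values (`n_samples`, `random_state`, the 1–10 range) are illustrative assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with 3 true clusters, standing in for the scatter plot
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Compute WCSS for k = 1..10; scikit-learn calls it `inertia_`
wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)

# Plotting `wcss` against k would show the elbow: steep drops at k = 2 and
# k = 3, then only small decreases from k = 3 onward.
print(len(wcss))  # 10
```

On data like this, `wcss[0]` dwarfs `wcss[2]`, and the curve flattens after three clusters, matching the Step 1–5 narrative above.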
So centroid 3 is the threshold: it tells us how many clusters to include in our dataset. This method of finding the optimal number of clusters is known as the elbow method.

Now we will do a practical implementation of the K-Means algorithm. We first import the Mall Customers dataset, which contains various categories of customers that we need to group.

Dataset

We should follow these steps to build the K-Means model:

Step 1. Import the libraries.

Step 2. Import the dataset.

Step 3. Split the data into a matrix of features X (here we take ‘Annual Income’ and ‘Spending’ into consideration for the clustering). Note that clustering is unsupervised, so there is no dependent variable y to predict.

Step 4. Find the optimal number of clusters, or centroids, using the elbow method. Looking at the resulting graph, we need 5 centroids for K-Means clustering on this dataset.

Step 5. Put in the value 5 for the optimal number of clusters and fit the model, assigning each customer to a cluster.

Step 6. Visualize the clusters.
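The practical steps above can be sketched as follows. Hedged assumptions: the Mall Customers CSV is not reproduced here, so synthetic blobs stand in for the two chosen columns, and the `Mall_Customers.csv` path and column indices in the comment are illustrative guesses rather than the article's exact code.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Steps 1-3: in the article, X comes from the mall dataset, roughly:
#   X = pd.read_csv("Mall_Customers.csv").iloc[:, [3, 4]].values
# Here, synthetic 2-D blobs stand in for (annual income, spending score).
X, _ = make_blobs(n_samples=200, centers=5, random_state=0)

# Steps 4-5: fit K-Means with the elbow-chosen k = 5
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Step 6 would scatter-plot X colored by `labels`; here we just confirm
# that every point received one of the five cluster ids
print(sorted(set(labels.tolist())))    # [0, 1, 2, 3, 4]
print(kmeans.cluster_centers_.shape)   # (5, 2): one 2-D centroid per cluster
```

With the real dataset, the same `fit_predict` call assigns each mall customer to one of the five segments, which can then be plotted against income and spending score.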
https://medium.com/swlh/how-to-choose-the-right-number-of-clusters-in-the-k-means-algorithm-9160c57ec760
['Manik Soni']
2020-10-04 11:32:42.648000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Data Science', 'Data Visualization']
Modulus Mercury Release Notes
New Features: There’s five main new features we think you love! Introductory Tour Guide Wao! So slick! We have a new tour guide that’ll introduce you to the basics of how Modulus works, how courses are laid out, and how to make courses as a teacher and navigate the web application interface. We think our interface is intuitive enough that it’s not really necessary ;) but in case you need it — it’s here! 2. Invite and Block Students Useful for blocking those students that somehow got the join code… The workflow’s been revamped from the ground up, and now we have separate tabs for different course views. As a teacher, one new tab is the “Manage Students” tab, which lets you invite students via a Join Code, as well as manage the students that are already enrolled. Each student’s name and email is available to you, and if someone’s there that shouldn’t be, the flip of a switch will remove them. If you want to add them back at any time, simply flip the switch back on! 3. Redesigned Course Editing It’s incredibly simple and intuitive to edit course content (why does no one else do this??) We did mention the workflow’s been revamped, right? Through extensive rounds of user testing, we’ve simplified and streamlined the entire user experience. Editing the course happens on the “Edit Modules” tab entirely now, which means you have a central place to make all the changes you want. Add modules, delete modules, rename modules as you see fit. Module items are edited just as easily — change course contents, links, files, documents, assessments, and lectures in a simple, guided process. Where we could, we even provided guides! So that if you’re new to YouTube or Google Drive, it won’t pose any barrier to getting started publishing your courses online. 4. Vastly improved Course Creation Inspired by program installers, you can now click your way to being a teacher! More user testing meant more suggestions, and one of our favorites was a plan to redesign course creation! 
Instead of one large course creation dialog on a single page, we’ve split it up across four focused, concise steps. In order, you can set the course name, course description, course visibility, and course subject, with no impediments to an intuitive workflow.

5. An Open Library of Free, Online Courses

Democracy arrives in the educational sphere at last! Of course, with so many teachers making so many great courses, we couldn’t resist making an open forum for education. After all, Modulus is all about promoting the work of independent tutors and teachers outside of large, locked-in establishments and institutions, and building peer-to-peer networks of students and teachers rather than hierarchical imbalances of authority figures and their pupils. The logical continuation of that, of course, was an open platform where anyone can publish their course contents for anyone else to join and use. As a teacher, you can choose to make your course public, and it’ll automatically be added to this open catalog for anyone, anywhere in the world, to join. As a student, you can enroll in as many of these free, open courses as you want, and learn anything available on our site.

6. And so, so much more!
https://medium.com/the-modulus-blog/modulus-mercury-release-notes-5bdbd6d936f5
['Daniel Wei']
2020-06-17 03:40:29.017000+00:00
['Software Development', 'Release Notes', 'Startup', 'Edtech']
Nothing’s More Important Than Showing Up
Photo: vgajic/Getty Images Bestselling author Seth Godin has been writing frequently about creativity on Medium the past few weeks. This week he wrote about the importance of simply showing up. For Godin, creativity isn’t as much inspiration as it is application: arriving on time and doing the work. In a word: practice. “When we commit to a practice, we don’t have to wonder if we’re in the mood, if it’s the right moment, if we have a headache or momentum or the muse by our side. We already made those decisions.” It was key to his own success, he says: “Twenty years ago, I decided to blog every day. There will be a blog from me tomorrow. Not because it’s the best one I’ve ever written, or perfect, or even because I’m in the right mood. It will be there because it’s tomorrow.” I’ve thought a lot about this post as we head into this pandemic winter. How do we face it? How do we continue to do great work? I think Godin has the answer: methodically, regularly. You can find control amid chaos but only if you apply a schedule that gives you space to create. Now, more than ever, structure is a shelter from the storm.
https://forge.medium.com/nothings-more-important-than-showing-up-476900bc1fc7
['Ross Mccammon']
2020-11-13 17:27:08.828000+00:00
['Creativity', 'Self', 'Practice']
How to use Firebase UI to control data flow after signing in successfully (redirect or stay?)
To build a user authentication system, Firebase UI is a brilliant tool! Once you install and configure it properly (both in your code and in the Firebase console), you automatically get 11 beautiful, working sign-in UIs; GCP (Google Cloud Platform) users even get two more options, enabling SAML and OIDC providers.

Install and configure

Getting started is not too hard: follow the documentation to install the package, enable the specific provider in the Firebase console, and configure everything. Since I am using it for my Gatsby app (Gatsby is a framework built on React), I also used this documentation to set it up in React. There are two FirebaseUI components you can use: FirebaseAuth and StyledFirebaseAuth. The difference is illustrated in the documentation: FirebaseAuth has a reference to the FirebaseUI CSS file (it requires the CSS), while StyledFirebaseAuth is bundled with the CSS directly. I use the latter. Here is a basic StyledFirebaseAuth example from the documentation:

// Import FirebaseAuth and firebase.
import React from 'react';
import StyledFirebaseAuth from 'react-firebaseui/StyledFirebaseAuth';
import firebase from 'firebase';

// Configure Firebase.
const config = {
  apiKey: 'AIzaSyAeue-AsYu76MMQlTOM-KlbYBlusW9c1FM',
  authDomain: 'myproject-1234.firebaseapp.com',
  // ...
};
firebase.initializeApp(config);

// Configure FirebaseUI.
const uiConfig = {
  // Popup signin flow rather than redirect flow.
  signInFlow: 'popup',
  // Redirect to /signedIn after sign in is successful.
  // Alternatively you can provide a callbacks.signInSuccess function.
  signInSuccessUrl: '/signedIn',
  // We will display Google and Facebook as auth providers.
  signInOptions: [
    firebase.auth.GoogleAuthProvider.PROVIDER_ID,
    firebase.auth.FacebookAuthProvider.PROVIDER_ID
  ]
};

class SignInScreen extends React.Component {
  render() {
    return (
      <div>
        <h1>My App</h1>
        <p>Please sign-in:</p>
        <StyledFirebaseAuth uiConfig={uiConfig} firebaseAuth={firebase.auth()}/>
      </div>
    );
  }
}

Callback after signing in

The documentation is very clear, so I didn’t plan to say much about this. What I care most about is the data flow, because when I used it a few days ago in my Gatsby app, I didn’t find much information on it. So I want to write it down in case someone needs it in the future. Once everything is configured, you can sign in with the provider, connect to Firestore, and a user is created under Authentication in Firebase. After that, signInSuccessUrl: '/signedIn' redirects the flow to /signedIn. But what if you want to do more than just redirect? Then you should use a callback named signInSuccessWithAuthResult. It should be nested in the callbacks object in uiConfig, so the example in the documentation looks like this:

// Configure FirebaseUI.
uiConfig = {
  // Popup signin flow rather than redirect flow.
  signInFlow: 'popup',
  // We will display Google and Facebook as auth providers.
  signInOptions: [
    firebase.auth.GoogleAuthProvider.PROVIDER_ID,
    firebase.auth.FacebookAuthProvider.PROVIDER_ID
  ],
  callbacks: {
    signInSuccessWithAuthResult: () => false
  }
}

Use the signInSuccessWithAuthResult callback to decide whether to redirect or stay

The callback is a function that takes two parameters, authResult (which usually contains the user) and redirectUrl, so you can do more work after signing in successfully. What it returns is a simple boolean value. If it returns true, then after signing in, the user is redirected to the URL specified by signInSuccessUrl. If it returns false, the user stays on the same page.
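To make the redirect-or-stay decision concrete, here is a minimal, self-contained sketch. The authResult and redirectUrl parameter names follow the FirebaseUI documentation; the console.log body is purely illustrative.

```javascript
// The boolean returned by signInSuccessWithAuthResult decides what
// FirebaseUI does next: true -> redirect to signInSuccessUrl,
// false -> stay on the current page.
const uiConfig = {
  signInFlow: 'popup',
  signInSuccessUrl: '/signedIn',
  callbacks: {
    // authResult holds the signed-in user; redirectUrl is where
    // FirebaseUI would send the user if we returned true.
    signInSuccessWithAuthResult: (authResult, redirectUrl) => {
      console.log('Signed in as', authResult.user.displayName,
                  '- redirect target would be', redirectUrl);
      return false; // stay on the same page
    },
  },
};
```

Returning false here is what lets you keep the user on the page and run your own post-sign-in logic instead of triggering an immediate redirect.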
Take care of async requests in signInSuccessWithAuthResult

Another problem I hit while playing with signInSuccessWithAuthResult is that it expects exactly a boolean value: when I used async/await to fetch data from a database or an API, I got an error stating that what I returned was a Promise<boolean>, which does not match what signInSuccessWithAuthResult expects to receive. To handle this, use a very small trick: handle the promise with .then inside the function. It is just a small trick, but I didn’t see much information about it online, so I am writing it down here. Hopefully it will help you some day.

Notes

One more note, about signInFlow: there are two options, popup and redirect. It defaults to redirect, so if you want to speed up the loading process, specify popup! Thanks for reading!
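The .then trick described above can be sketched like this. Note that fetchUserProfile is a hypothetical stand-in for whatever async database or API request you need to make after sign-in.

```javascript
// signInSuccessWithAuthResult must return a plain boolean, not a
// Promise<boolean>, so don't declare it async. Kick off the async
// work with .then(...) and return the boolean synchronously.
function fetchUserProfile(uid) {
  // Hypothetical async request (e.g. a Firestore read or a REST call).
  return Promise.resolve({ uid, plan: 'free' });
}

const callbacks = {
  signInSuccessWithAuthResult: (authResult) => {
    fetchUserProfile(authResult.user.uid).then((profile) => {
      console.log('Profile loaded for', profile.uid);
    });
    return false; // a real boolean: stay on the same page
  },
};
```

The async work still runs to completion in the background; it just no longer determines the callback’s return value.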
https://medium.com/dev-genius/how-to-use-firebase-ui-to-control-data-flow-after-signing-in-successfully-redirect-or-stay-10ff238cef70
['Yingqi Chen']
2020-10-31 21:02:46.103000+00:00
['Firebase', 'Firebaseui', 'Signinsuccessurl', 'React', 'JavaScript']
Information Design Round-Up: COVID-19 Edition
A curation of information design and dataviz that aids in the public understanding of COVID-19

Credit (top-left to bottom-right): Reuters, Periscopic, Reuters, Financial Times, New York Times, Pentagram

This post is the second in a series dedicated to my favorite information design and data visualization focused on a particular topic, in this case, COVID-19. As in my other posts, “favorites” tend to be examples that demonstrate thoughtful data analysis, engaging visualization methods, a cohesive narrative, and a seamless user experience. My first book, Information Design for the Common Good (Bloomsbury), is officially into design and production mode, and I’m thrilled to finally have time to devote to blogs and articles. The global coronavirus pandemic has inspired numerous efforts to understand every angle of its prevalence, with that angle shifting over time as we begin to understand parts of the virus. Here are some of the amazing visuals that emerged to aid public understanding of COVID-19. The first set of links is presented in chronological order of publication, demonstrating the evolving understanding of the virus and the shifting focus of the media narrative. The second grouping of links represents some great overview data examples, updated frequently as new information becomes available.

A Chronological List of COVID Data Stories

1. Why outbreaks like coronavirus spread exponentially, and how to ‘flatten the curve’ (March 14th, 2020), The Washington Post, Harry Stevens

Now the most viewed Washington Post story EVER, Harry Stevens’ piece focuses on understanding what were new concepts to the general public.
Flattening the curve, lockdown, social distancing, quarantine, and isolation were all somewhat abstract ideas as countries began to mandate stay-at-home orders and mask-wearing. Stevens effectively shows simulated environments where circles (people) randomly interact with their surroundings without getting into the nitty-gritty of case numbers or specific infection rates. A simple color system shows healthy, infected, and recovered members of different scenarios within the simulated community. Scenarios include: a free-for-all, attempted quarantine, moderate distancing, and extensive distancing. The dots’ interaction is also tracked as a graph in real-time, clearly demonstrating to the viewer the difference in outcomes. As a piece created early in the pandemic, it was precisely the appropriate information to be sharing with an audience trying to understand how their behaviors impact the spread of the virus with very little understanding of its nuances. The simple simulation showcases important public health concepts, while acknowledging the sizable data gaps around how COVID itself actually spreads and the unknown nuances of the virus.

2. How the Virus Got Out (March 22nd, 2020), The New York Times, Jin Wu, Weiyi Cai, Derek Watkins, and James Glanz

A week later, NYTimes published an interactive story that coincidentally also uses dots to show disease spread. As the story states in the beginning, “It seems simple: Stop travel, stop the virus from spreading around the world.” By visualizing how, when, and where people moved geographically, the story explains clearly and visually why travel restrictions had no chance of stopping COVID-19’s spread. By SEEING the large city and transportation hub of Wuhan, China, along with travel volume, you immediately get a sense of how vulnerable the setting is for disease spread.
Multi-day incubation periods and high infection rates combined with varying symptom severity (if any) and holiday travel clearly show how the virus could spread quickly. Local outbreaks throughout China and continued international travel helped transport coronavirus to many large cities worldwide, including New York, Sydney, Bangkok, Tokyo, and Seattle. The NYTimes story uses data visuals of travel patterns combined with travel restrictions to show how global governments were usually a step too late in their efforts. As the virus spread, locations continuously stopped travel AFTER the virus had already begun spreading locally.

3. COVID-19 Charts (April 2020), Pentagram, Giorgia Lupi

Created in April of 2020, Giorgia Lupi and her team at Pentagram tackled a hypothetical redesign of New York governor Andrew Cuomo’s daily press briefing graphics. The briefings represent the shutdown peak and a desperate need to flatten the curve to ease the strain on the healthcare system. While Governor Cuomo offered detailed numbers on cases, hospitalization, and lives lost, the numbers lacked context and humanization. Lupi’s Summary graphic is particularly useful for combining numbers that previously were presented separately, like deaths, hospitalizations, and r-naught. When each dataset is shown in isolation, it is hard to grasp what exactly the data are telling us. However, shown together, viewers can begin to see connections. Public health and policy graphics are designed to inform the public and possibly influence individual prevention decisions, thus benefiting from the human-centric approach that Lupi champions.

4. How coronavirus hitched a ride through China (April 16th, 2020), Reuters

In mid-April of 2020, Reuters published a story that followed COVID-19’s spread around China, covering some of the same data incorporated into the beginning of the NYTimes story mentioned above, but in a completely different way.
The story is a thoughtful integration of timeline information, maps, and flowcharts with icons to show COVID-19 spread across people, and vignettes with illustrations of specific people (or characters in the story) traveling the country. Despite the variety of visual forms, the information is easy to follow with a minimalist style and a small color palette of black, white, red, and a little tan. The narrative style achieves an impressive level of viewer engagement, understanding, and investment.

5. Tracing COVID-19 (April 28, 2020), Reuters

Less than two weeks later, the Reuters graphics team again published a story about the spread of COVID-19, this time focusing on technology as a means to conduct contact tracing. While the idea of tracing people’s whereabouts through an app or mass surveillance can sound quite invasive, the story walks through a visual explanation of exactly how the technology works. After reading the story (with minimal text and excellent use of imagery), you can quickly understand how COVID-19 and future outbreaks can be traced, and the difference between using an app and mass surveillance.

6. Virus Mutations Reveal How COVID-19 Really Spread (June 1st, 2020), Scientific American, graphic by Martin Krzywinski

Martin Krzywinski created a piece for Scientific American examining the spread of COVID-19 from a genomic sequencing perspective. The data used for the feature are from Nextstrain, a massive collection of openly-shared genetic data from research groups worldwide. We can see patterns and narratives emerge by visualizing these genetic strains, including the Grand Princess cruise ship outbreak and those in the United States, Iran, and Italy. Mapping the spread suggests that the virus has displayed minimal mutations.
As noted in the article, “Mapping the spread also substantiates actions that could have best mitigated it: faster, wider testing in China; earlier, stricter global travel bans and isolation of infected people; and more immediate social distancing worldwide.”

8. How will we distribute a COVID-19 vaccine? Here’s one potential path. (September 29th, 2020), National Geographic, Diana Marques and Alexander Stegmaier

Much buzz has circulated in the media about the global race to develop a COVID-19 vaccine. National Geographic’s feature from September 29th, 2020, gives an overview of where we stand in the vaccine development process, which runs from the SARS and MERS research that helps guide scientists, through trials, to production and distribution. The feature also offers expert estimates as to how many doses would be needed globally and how many people would need the vaccine to slow the virus’s spread.

9. Tracking the White House Coronavirus Outbreak (October 6th, 2020), The New York Times

After months of downplaying the virus, misleading the public, and ignoring public health guidelines, President Trump announced that he was positive for COVID-19 on October 2nd. The New York Times story attempts to assemble a timeline of people, places, and events in the days leading up to and following the announcement. With little public transparency from Washington, NYTimes hypothesized who may have been infected — when, and where — based on information gathering. The story effectively takes the breadcrumbs of information that trickled out of the White House and provides chronology and context to the outbreak.

10. COVID-19 Spreading Rates (September 2020), TULP interactive

TULP interactive takes a different spin on the global COVID-19 data dashboard by simulating the average rate of newly reported cases from the past week. Users get a sense of how quickly the global case count is climbing while staying on the webpage and observing the range of rates from country to country.
Intuitive colors aid the overview: each country opens in gray with zero cases, then turns red once a case is estimated to have been reported (and tallying begins), while countries with no new cases in the past week remain blue. The use of sound in the form of a chime effect helps the user process the cases increasing in the countries out of view. The timing of this website’s release was also impactful, in that the world was already over nine months into the pandemic, with cases persisting. With the global case count on the rise, this site is a reminder that we are still in the thick of the worldwide crisis.

Lifelines, by Periscopic

11. Lifelines (October 2020), Periscopic

Periscopic consistently produces some of my favorite work — it is unique, engaging, beautifully designed, and brings a true elegance to complex data. Be warned that Lifelines is a particularly powerful piece that explores victims of COVID-19 beyond those directly infected. Deaths of despair from unemployment, isolation, and substance abuse are projected to also have devastating effects in 2020–2029, but those effects are preventable given adequate support structures. Beyond presenting data on past, present, and projected future deaths of despair, Lifelines allows the user to interact by adjusting levels of mental health care, employment status, and social connection. As the interactive sliders are adjusted towards better or worse conditions, photographs of those lost lightly flash in the background as their “orb” sinks below the waterline. While the majority of the piece is emotional, it starts with a clear warning about potentially triggering themes and ends with a number of ways to help prevent tragedy, along with professional resources.
COVID-19: The Global Crisis in Data, by Financial Times

12. COVID-19: The Global Crisis in Data (October 18, 2020), Financial Times

The Financial Times has one of the most comprehensive data dashboards that we have seen throughout the pandemic, but their October 18th feature is perhaps a more accessible and narrative version of their work, neatly tying their numerous data-driven angles together. The aforementioned dashboard is more exploratory in nature, demanding a certain level of understanding of, or at least interest in, data and drawing your own conclusions. The feature story, however, represents an explanatory level of comprehension several months later — never losing sight of the global death toll, but also presenting the evolving impact that people and governments around the world have experienced. As the Financial Times states, “data has been the only way to truly understand the scale and impact of COVID-19…Individually, each tells a small, yet important, part of the story. Collectively, they help explain the virus’s enormous death toll — and why its impact will last for years to come.” While this specific feature uses data as the dominant story-telling device, it should be noted that this is part two of a six-part series by the Financial Times that helps paint a fuller picture of the complex, multi-dimensional crisis. There have been numerous efforts to continuously update COVID-19 case statistics in dashboards and informational resources. These are five of my favorite general COVID-19 resources (in no particular order) that have been updated to reflect the latest available information.

Continuously-updated General COVID Resources

1. Where US Coronavirus Cases are Rising and Falling, Reuters

The Reuters team created a simple and elegant overview of COVID-19 data for the United States, taking weekly data from March 1st to the present. The map views allow for a clear picture of overall trends and key takeaways, with tooltips for detail.
The tables with sparklines offer valuable additional contextual data. My favorite section of the page shows weekly tests per 100,000 people, with testing trends and positivity rates as a fascinating extra layer of information.

2. Financial Times Coronavirus Tracker

For the data geeks looking to dig into the nitty-gritty of global data, the Financial Times dashboard offers numerous ways to examine COVID-19 data. Users can compare countries, or up to six U.S. states. Additional options include looking at deaths versus cases, new versus cumulative, raw case numbers versus per million, and more.

3. Johns Hopkins Resource Center

Johns Hopkins has a massive Coronavirus Resource Center with many outstanding graphics to help make research data more accessible, all with clear instructions on how to read the graphics. Their page on the impact of opening and closing decisions by state is particularly interesting. Considering the correlation between policy and COVID-19 cases and deaths gives a greater understanding of social distancing measures.

4. Timeline of WHO COVID-19 Response

WHO’s timeline is a seemingly simple, but hugely effective, look at layers of global information spanning back to December 31st of 2019. In one horizontally scrolling timeline, WHO accounts for case counts by worldwide region and the type of action taken (categorized as information, science, leadership, advice, response, or resourcing). Key actions are indicated with a star, and all actions link out to official statements, news releases, and guidelines. For users primarily familiar with case data, the timeline’s information gives important global context.

I hope you enjoyed this roundup of COVID-19 information design from around the web. What awesome designs out there did I miss? Share some favorites (from this list or elsewhere) below!
Courtney Marchese is an award-winning designer and professor with over a decade of professional experience specializing in data visualization, user experience design, and design research. Her forthcoming book from Bloomsbury Press, Information Design for the Common Good, explores the critical role of information design and data visualization related to the visual explanation of some of today’s most challenging human-centered concerns, including social, political, environmental, and global health issues.
https://medium.com/nightingale/information-design-round-up-covid-19-edition-edc117af32e8
['Courtney Marchese']
2020-11-23 14:03:27.681000+00:00
['Dataviz', 'Covid 19', 'Public Health', 'Data Visualization']
Remaining Technical as an Engineering Leader
The question “how technically hands-on should I be as a team leader?” comes up frequently in my conversations with engineering leads. The short answer is: “definitely to some extent”, but for a more specific answer we need to consider context, your individual team makeup, and your goals as a leader. Remaining technical is essential Let’s begin by acknowledging that engineering leadership is a technical role. You are responsible for supporting software engineers who do technical work, for helping to shape the team’s technical vision, and for making strategic technical decisions. To remain credible and viable as an engineering leader it is essential that you remain technical. Remaining technical means maintaining: A sufficient depth of understanding of the systems your team owns The ability to discuss technical concepts fluently Sound technical judgement to be able to reason about trade-offs, and make good decisions Should I be writing code? For many, remaining technical is synonymous with continuing to write code, and indeed, writing code does help, but the key choice is now what code to write. When you move from a maker’s schedule to a manager’s schedule, the large blocks of time required for deep work, like writing code, become harder to find. Furthermore, when they do appear, the chance of being interrupted remains high. As such, it is generally not possible — nor advisable — to commit to delivering work that is on the critical path of your team’s projects. Attempting to play the roles of software engineer and team leader simultaneously is a recipe for overwork and burnout. The dilemma then is working out how to remain technical as your opportunities for coding become increasingly limited, and the range of coding tasks realistically available to you is narrower compared to when you were in a software engineering role. 
Photo by Chris Ried on Unsplash Remaining technical away from the critical path Writing code is one of the ways to remain technical, but is not (and shouldn’t be) the only way. Here are some activities available to you that help you remain technical: Read code reviews — set time aside to read the code reviews your team is producing. This helps you maintain awareness of the areas of the code that are changing, spot technical debt, and keep an eye on quality by reviewing tests and reviewer comments. Note that I say read code reviews, rather than participate in code reviews — it’s not that you should never participate, but be aware that being a required reviewer makes you a bottleneck. Left unchecked, I’ve seen this result in all code reviews needing sign-off from the team lead, which can really slow the team down. Participate in design discussions — understand how your systems are changing, ask thoughtful questions to help highlight trade-offs, and help guide the team to make sound technical decisions. Read post-mortems — learn from the challenges other teams have faced, and bring this knowledge back to your team. Invest in automation — look for opportunities to automate part of your team’s workflow. This type of work requires you to first understand the processes you have in place, forcing you to get closer to the day-to-day experience of the engineers of your team. It also typically involves writing some code, maybe a short script. Investigate new technologies — the implementation languages and technologies that we use change over time. For example, your team might be thinking about trying out Docker for local builds, or planning to use Kafka in a future project. Take some time to follow tutorials, and build a working example that you can continue to hack on. This will help you develop a basic understanding of the concepts and terminology of the new domain, and give you a starting point to explore from in the future. 
Pick up well-defined, well-scoped, lower-priority bug fixes or small features from your team’s backlog — the value of the bug or feature here is inherently low, but that’s not the point. These tasks give you a chance to use the same tools and processes as the team, to experience any pain points or friction first-hand, and to role-model high quality, well-tested pull-requests. It’s important to make it clear to the team why you’re picking up items like this, rather than something of higher priority from the top of the backlog. Try pair programming, favouring the observer role — this is a great way to learn from your team, and means that you can leave at short notice if required. Consume technical content — stay up to date with what is happening in the industry, read articles, watch videos, and attend meetups. Consuming is not the same as producing, but it helps keep your brain in a technical space, and encourages continuous learning. Participate in recruitment — conducting interviews in which you ask technical questions, and assess the code of others, helps to keep your knowledge of computer science fundamentals fresh, and practice thinking algorithmically. Some people might also suggest adding ‘Code in your personal time’ to this list. My view is that by all means do this if it’s something you enjoy and want to do, but don’t feel pressured, or force yourself to do it because of your role. It’s already too easy to take your work home with you as an engineering leader — I don’t recommend adding to this. A common follow-up question after discussing the suggestions above is, “But won’t my coding abilities suffer if I’m not writing any serious, substantial code like I used to?”. The reality is yes, they likely will degrade to some degree — and that’s okay. You are no longer being valued primarily for your technical contributions as a software engineer — your role is now to support the team doing the work. 
If you choose to pendulum back into a more hands-on software engineering role in the future, it will take you some time to ramp up, but that is to be expected. It’s not realistic to think that, at any moment, someone could switch back to performing their previous job as well as they ever have, having not done it for some time.

Other factors to consider

What does the team need? If you’re leading a very small team, or there’s a deadline fast approaching that requires all hands on deck, your ability to contribute technically and write code might not just be valued, but required. It’s important to ask yourself: what does the team need from me — is coding the best way I can support my team at the moment? Sometimes the answer can be yes.

What does your manager/department/organisation expect and value? Expectations of how technical leaders should be can vary between organisations, departments, and even individual managers. Subcultures exist, and it’s important to be aware of them. It’s essential that you have a conversation with your manager to ensure you are aligned on these expectations.
https://anothermarkwood.medium.com/remaining-technical-as-an-engineering-leader-7627fe0322c
['Mark Wood']
2020-12-04 15:19:14.545000+00:00
['Engineering', 'Leadership', 'Management']
Today you will finally understand Bitcoin
Bitcoin challenged our core economic assumptions — here is why it is the future. I will walk you through some core fundamentals that govern Bitcoin, without going into any technical details. These are notes I made after a fascinating conversation with an economics major.

What is a Bitcoin?

Bitcoin is money. If anyone tells you anything else, they most probably don’t understand what they are saying themselves. Bitcoin is money — plain and simple.

What is money?

Now that we claim Bitcoin is money, let us look at what money itself is. Until recently, I believed I understood money. I was wrong. Money is a medium of exchange and a store of value.

What is a medium of exchange?

When you pay a shopkeeper with a currency note, she accepts it. She trusts that the wholesale shop she buys her stuff from will accept the same currency note. The owner of the wholesale shop accepts the note because she trusts that the bank will take the note and increase the balance she can spend on a website. She then trusts the e-commerce website to accept a transfer of balance from one bank account to another in return for goods. And the cycle goes on. A valid medium of exchange is one which can pass hands without any depreciation in value. A good medium of exchange also makes trade easy to carry out. Early tribes used shells, feathers, and pebbles as a medium of exchange between tribes. These payments were made as an obligation or to cover damages caused.

What is not a medium of exchange?

A critical requirement for a medium of exchange is that the medium should not lose value upon changing hands. During the tulip mania, tulip bulbs were exchanged as a tradable item. Bulbs were bought purely on the speculation that the next person would buy them at an equivalent or higher value. That is pretty much what we described money to be above. But a tulip doesn’t have consistent value. A bulb, if not taken care of, loses value over time — i.e.
it can rot, or shed its petals with poor handling. Hence a tulip is not a valid medium of exchange. It might be a great trading commodity, but not a good medium of exchange. Boulders are a bad medium of exchange for the opposite reason: they don’t lose value on exchange, but they are hard to exchange in the first place. What is a store of value? You lock up money in a bank. $100 kept in a bank is worth $100 at any time. What you can buy with $100 is a different matter, but no one will say that your $100 is now $90 because it is 20 years old. Gold has long been seen as a great store of value because it doesn’t degrade with time. Silver is slightly less favourable because, over time, it reacts with the atmosphere to form an ugly black tarnish. Copper is even less favourable: it turns green on oxidation, and much faster than silver turns black. The Statue of Liberty, being made of copper, was once brown and turned green within a few decades. Not so favourable for a store of value; who would buy a green, degraded statue when they can buy a shiny copper one? What is not a store of value? If you own 10 kg of rice today, you can go to your neighbour and ask her to give you, say, 8 kg of wheat in exchange for the rice. Your kind neighbour might agree to do that. However, if you asked her for 8 kg of wheat in exchange for rice that is 20 years old and covered in fungus, the odds are far less favourable. If something loses its value over time, it is a bad store of value. Where does Bitcoin stand? As long as the next person takes Bitcoin and gives you a pizza, Bitcoin holds as a valid medium of exchange. It passes easily from one holder to another: it is all digital, with no bulk. If you can memorize your password (aka private key), you can cross borders with all your money. No international bank transfers, no suitcases filled with bills. It is also a store of value: the face value of the holdings doesn’t go down with time. Where does the US dollar stand?
You can hand over a dollar bill or make an online transaction to buy a pizza. It is easy to store dollars in a bank. A dollar bill’s face value doesn’t change with time. It is all the same. You pay at a shop using currency notes. You buy on Amazon using a credit card. You pay using a PayPal wallet on eBay. You purchase on the Lightning store using Bitcoin. All the same. If there is nothing new, why the hype? There are two fundamental differences between Bitcoin and fiat currencies like the USD. Decentralized and trustless. Trustless is Bitcoin jargon. It essentially means that there is no entity you need to trust. For a fiat currency you need to trust the entity in the middle. In the case of currency notes, you need to trust the government issuing the currency. Every currency note usually carries a phrase like “I promise to pay the bearer the sum of 1000 Rupees”, signed by the head of the central bank. If you hold a 1000 Rupee note, you can safely assume that you can buy things worth 1000 Rupees with it. This seems a fair assumption; it is fair to place that kind of trust in a government. However, India recently carried out a demonetization that rendered all Rs 500 and Rs 1000 notes invalid overnight. It did allow people some breathing space to exchange these notes at a bank. But if an Indian had been away trekking the Himalayas without connectivity over the past year, she would come back to learn that the Governor of the Reserve Bank of India had broken his promise to “pay the bearer the sum of 1000 Rupees”, and there is nothing the trekker can do about it. She trusted the government, and the government failed her. Bitcoin, on the other hand, doesn’t need you to trust an entity; rather, it needs you to trust the math. Yes, math. Bitcoin transactions are rule-based, governed by a branch of computer science and math called cryptography. Anyone can download the software and start validating the transactions happening on Bitcoin. The validation is simple.
Alice can send Bob 10 bitcoins only if she owns 10 bitcoins; the math should add up. Many people can run these validations by running this software. For every transaction that happens, everyone running the software on their computer votes on whether the transaction is valid or not. If the majority says it is valid, it is valid. The voting is structured in a way that liars are severely punished, not by a central authority, but by the mathematical rules. This is why Bitcoin is decentralized. Deterministic. A central bank mints money. How much money is minted is controlled, in essence, by the head of the central bank. The central bank pumps more money into the system to stimulate the economy. It doesn’t usually pump in too much money, because doing so reduces the economic value of the currency and hampers its buying capacity on the international market. It is thus in the best interest of the government not to mint excessive money. Zimbabwe in the late 1990s printed a lot of money, without accounting, to fund the Congo War. That resulted in a loaf of bread costing a truckload of Zimbabwe dollars. A Zimbabwean citizen could not question this central authority that minted dollars rampantly, almost at will, causing hyperinflation. Zimbabwe finally had to move to the USD as the medium of exchange even within Zimbabwe. Zimbabwe dollars are now worthless. Bitcoin is scarce and will be introduced into the market at a deterministic, pre-decided rate. There will be a total of 21 million bitcoins. All of them will be introduced into the system by around the year 2140. No inflation adjustments will take place. No entity can destroy the face value of a bitcoin by over-minting. What is the intrinsic value of Bitcoin? None. There is no doubt about that. If you fly to Mars today, the bitcoins that you hold on your computer are worthless. In the section “What is money”, we talked about the medium of exchange and the store of value.
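The deterministic issuance just described can be checked with simple arithmetic. The sketch below is illustrative rather than consensus code; it assumes the widely documented parameters of a 50 BTC initial block reward that halves every 210,000 blocks, and shows the total converging to roughly 21 million:

```python
def total_supply():
    """Sum Bitcoin's geometric issuance schedule (illustrative sketch)."""
    reward = 50.0                # initial block subsidy, in BTC
    blocks_per_halving = 210_000
    total = 0.0
    # Rewards smaller than 1 satoshi (1e-8 BTC) round to zero,
    # which is what eventually ends issuance.
    while reward >= 1e-8:
        total += reward * blocks_per_halving
        reward /= 2
    return total

print(round(total_supply()))     # 21000000 (approximately 21 million BTC)
```

Because each halving era contributes half as much new supply as the previous one, the series 50 + 25 + 12.5 + … converges to twice its first term, which is why the cap is about 21 million rather than growing forever.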
It is not a requirement for a medium of exchange or a store of value to have intrinsic value. The feathers, shells and pebbles used by early tribes had intrinsic value: they were ornamental. Gold has intrinsic value because it is ornamental. Gold is also used in much sophisticated machinery, but that is not why it is a store of value; if it were, silicon would be a super important store of value. Feathers, shells, pebbles and gold are all scarce, at least the ones used in trade. Their rarity is the cause of their value. And in that respect, Bitcoin is just as scarce. That leaves us with the ornamental value, for which I argue that the ornamental value is an effect, rather than the cause, of the value in exchange. Gold is considered particularly beautiful because it is scarce and a convenient way to show off wealth. Wealth is nothing but buying power in trade. Is it okay for a currency to have no intrinsic value? Yes. We have been using a currency with no intrinsic value for decades. Without most people realizing it, the US government, followed by other governments of the world, did away with the gold standard. You could once take a $100 bill to the government and ask for the equivalent amount of gold. You can’t do that any more. A currency bill is thus nothing more than a promise made by the state. Why am I still using dollars to buy my groceries? Bitcoin is not new. It was invented and deployed in 2009. Even after 10 years, we don’t see it as the primary form of currency. Underground activity. One of the main critiques of Bitcoin is that it facilitates underground activity. Drug purchases and terrorist funding have become particularly easy with this new medium, or so the claim goes. It is true that such cases have been exposed. But if a funder takes a suitcase of dollars to a terrorist outfit, there is absolutely no way to trace it back. Bitcoin can be traced back. It is not an anonymous currency.
The internet in the early 1990s saw its heaviest adoption from the porn industry. There were speculators who said the internet was a bad thing because all it did was encourage pornography. It is a valid claim only in a myopic view. Similarly, Bitcoin will open up a whole variety of possibilities. Government regulations. Governments have shown varied responses to Bitcoin. South Korea, India and China have been the latest to show opposition to it. Bitcoin being a medium of exchange diminishes the buying power of fiat currencies: a Rupee becomes more valuable if more Rupees are traded on the international market. If Bitcoin becomes a prevalent form of exchange internationally, it will be at the cost of other fiat currencies, at least in a few areas. We will see, however, that this is not a zero-sum game. Real technical challenges. There are some real technical challenges that Bitcoin faces at the moment. Bitcoin can support only 7 transactions per second, whereas Visa processes 45,000 transactions per second! This seems a huge difference, but starting January 2018, solutions like the Lightning Network have been deployed that aim to match Visa-like processing rates. The breakthrough is just around the corner. Bitcoin is also facing some challenges in maintaining its decentralized nature. Again, work is in progress. Why should I be excited about Bitcoin? Decentralized, trustless, deterministic. As we saw earlier, Bitcoin cannot be subverted by a single entity. It is a major improvement over the current form of currency we are using. A modern currency for a modern world. A lot of our transactions have now moved online, but the currency still remains the same. There is no way to pay for only the 10 minutes of a movie that you watched, or to get paid a couple of cents for filling out a form.
It is not possible to make a very small payment using USD, because Visa and PayPal charge transaction fees that make payments of a few cents infeasible: you would pay more in transaction fees than the actual value. These small payments are called micropayments. With scaling solutions like the Lightning Network on Bitcoin, these micro transactions become very much feasible. It is thus not a zero-sum game with fiat currencies, because these are avenues where fiat currencies don’t get traded in the first place. Bitcoin is the first such system that has worked. If the few unanswered questions find their technical solutions, it will be the biggest thing to have happened to software since the internet. How to trade Bitcoin and become rich? I don’t know.
https://medium.com/madhavanmalolan/today-you-will-finally-understand-bitcoin-36fc2782b9cf
['Madhavan Malolan']
2018-03-08 09:39:12.201000+00:00
['Future', 'Bitcoin', 'Money', 'History', 'Government']
World in Crisis, Pandora Still Rocks
With all the bad news lately, I decided it was time to put together an upbeat punk-pop-rock station on [Pandora](http://www.pandora.com) so at least *something* sounded kind of happy, if not peppy-mad. The thing is, I could think of a few bands I wanted to seed this one with, among them [The Ting Tings](http://www.myspace.com/thetingtings) and [Metric](http://www.myspace.com/metricband), both good at being mad with an upbeat sound; rebels all. But there was this other band… Swedish, I think. I hadn’t listened to them in years and just couldn’t think of the name, even though I knew it was really simple. Well, three songs into listening to my new station, who comes up but [the Sounds](http://www.myspace.com/thesounds)! Wow. The very band I couldn’t recall. Pandora borders on psychic when it does stuff like this. By far the best music website out there, folks. [Tune into my new station](http://www.pandora.com/stations/7f87a543b69d4af4ba73d19387cd3c75c874869f84047ca8) and check it out for yourself!
https://medium.com/minds-on-media/world-in-crisis-pandora-still-rocks-ef8b2689c29d
['R. E. Warner']
2018-01-29 07:01:56.268000+00:00
['AI', 'Metric', 'Pandora', 'Brit Pop', 'Internet Radio']
Getting through a challenging age with data
Following the implosion of the US housing bubble, when the late-2000s Great Recession began to hit markets worldwide, many enterprises found themselves in dire straits. In the ensuing crisis, enterprises vied for suddenly limited resources amidst a global economic downturn. As consultants, Starschema provided them with data-driven insights into their economic processes that allowed them to identify efficiencies without jeopardizing every enterprise’s most valuable resource: the people that make it great. Over the years, we’ve helped companies great and small weather many a storm. From economic downturns to supply chain interruptions, we are no strangers to the challenges many companies are now facing. We’ve walked with our clients through successes and trying times. What we learned from a decade and a half of committed work is a core pillar in the foundation of Starschema: our belief that data has the power to change the world for the better. The current crisis will be no different: it will deeply affect people’s lives and livelihoods. We know from experience that data-driven approaches can make a difference for public health outcomes, and help companies respond to the challenges ahead and navigate through uncharted territory. Business continuity in times of crisis Globalization has created a uniquely interdependent economic system, with long and often obscure supply chains. Many enterprises, especially in the manufacturing sector, expect to face shortages of raw materials and components. Workplace shutdowns, mandatory quarantines and border closures are putting a strain on service providers worldwide. Supply chain analytics can help enterprises understand their supply chains better. As providers of sophisticated graph-based analytical solutions, we can help you understand your company’s exposure to affected areas and how disruption cascades throughout your organization.
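The cascading exposure described above can be sketched as a simple graph traversal. This is not Starschema's actual tooling, just an illustration of the idea; the supplier names and edges below are hypothetical:

```python
from collections import deque

# Hypothetical supplier graph: each edge points from a supplier
# to the facility it supplies.
supply_chain = {
    "raw_materials_asia": ["component_plant_eu"],
    "component_plant_eu": ["assembly_plant_us"],
    "assembly_plant_us": ["distribution"],
    "distribution": [],
}

def affected_downstream(graph, disrupted):
    """Breadth-first traversal: every node reachable from a disrupted supplier."""
    seen, queue = set(), deque(disrupted)
    while queue:
        node = queue.popleft()
        for downstream in graph.get(node, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

print(sorted(affected_downstream(supply_chain, ["raw_materials_asia"])))
# ['assembly_plant_us', 'component_plant_eu', 'distribution']
```

Real supply chains fan out and converge, which is exactly why a graph model (rather than a flat supplier list) is needed: one disrupted raw-material source can reach many finished products through shared intermediate nodes.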
Through mapping your supply chain, you can better understand the effect that various governmental measures and staff outages have on your company. To this end, we created a database of curated COVID-19 incidence data. The intention is to ingest reliable data from multiple sources and make it analytics-ready so it can be easily accessed and used. A number of leading companies are already using this database in their business continuity analyses. This information can be integrated into a business continuity dashboard and a supply chain status board, which allow companies to make data-driven decisions on whether manufacturing needs to be suspended or diverted. In addition, our expertise in operations research (OR) and optimization algorithms can be used to reschedule jobs to leverage available resources and production capacity. The range of OR tools and mathematical optimization algorithms used in predictive maintenance and supply chain prioritization can also be invoked to help companies achieve peak productivity even amidst staff absences, raw material and component shortages and public health limitations. Our work for the common good At Starschema, we believe data belongs to everyone and should be used for the common good. For this reason, we have joined forces with Snowflake to make available to the public, free of charge, a gold standard data set that collates data on the COVID-19 outbreak. This data set was developed with the support of our partners at Tableau, DataBlick and Mapbox, and is now maintained by Starschema’s data engineers and data scientists. It is, of course, available worldwide for public and private use alike under a BSD-3 license. As a long-standing partner of various global organizations like the UN’s World Food Programme, the European Commission and PATH, Starschema continues to work with international aid organizations and NGOs to achieve better health outcomes throughout the planet.
Data analytics companies can play a decisive role in the battle against COVID-19, and Starschema is proud to be part of the solution. Open data can be used to better understand both disease dynamics and the human landscape in which they take place: Alongside data on healthcare capacity, the COVID-19 data set can be used to identify healthcare providers and hospitals that are at risk of reaching or exceeding capacity, allowing non-critical patients to be diverted or transferred before the institution reaches capacity. Retailers can use the COVID-19 data set to proactively respond to increased market demand and identify stores vulnerable to supply shortages. Charities and food banks can identify areas where school closures have put children who depend on school meals at risk of malnutrition, and direct supplies to the most affected areas. Financial institutions can use the data to build inferences about possible default risks and develop early mitigation strategies to help their customers in this difficult time. As this difficult situation continues to evolve, please know that for the first time in history, a unique effort is being mounted to defeat a pandemic. Alongside the committed experts of public health services worldwide, the doctors and nurses on the front lines of this battle, the contingency planning staffs in all affected countries, civil servants, police and the military, there is, for the first time, a new string to humanity’s bow in the battle against COVID-19. By leveraging all the discoveries and innovations of data science, data engineering and machine learning, we now have a new tool to bring to bear on this challenge. And just as firmly as we believe in data, we believe that the incredible creative genius of humanity, and the right data, will see us through this crisis.
https://medium.com/starschema-blog/getting-through-a-challenging-age-with-data-c1eda5d4da6c
['Tamas Foldi']
2020-03-19 22:56:12.574000+00:00
['Snowflake', 'Covid 19', 'Data Engineering', 'Data Sharing']
How to use Figma: the essential guide
How to use Figma: the essential guide Includes 4 quick tips that you might not know about Photo by Med Badr Chemmaoui on Unsplash What is Figma? Essentially, a prototyping tool that is primarily web-based. It can be used as a vector graphics editor too, and it has additional offline features, but I personally just use it as a convenient web-based prototyping tool. Just: Log on -> Sign In/Up -> Start Designing. It’s that easy. As someone without any formal design background, I found this tool extremely easy and convenient to use. In Part 1 of this tutorial, I’ll write about some basic things that I think most people would already know (just to ensure there’s sufficient coverage). In the second part, I’ll list a few cool (but essential) features of Figma that I only found out about after watching some tutorials. Let’s get started! :) Part 1 (The intuitive/basic stuff) Once you log in, you should see your home page with this side menu on the left: Screenshot taken from Author’s personal Figma Account Click on the “+” sign for the “Drafts” option and you’ll be directed to: By Author — this is the space for your prototyping Click on this button to select a frame On the right side of your screen, you’ll see this menu appear: Depending on what UI you’re prototyping for, select the appropriate one. Once that is done, the frame will appear like so: Screenshot by Author You can proceed to change the background colour to whichever colour you want. Try adding different shapes and text to your web page by clicking the rectangle icon here: Add a variety of shapes and stuff to your web page design Say we have something like this (two frames, one with a “Click me” button and another empty page): Screenshot by Author What if we want to transition to frame 2 (Desktop — 2) once the button is clicked? Click on the “Prototype” option right under the “Share” button.
Then, once your mouse hovers near the “Click me” button, a little circle will appear on the side of the button. Just click that little circle and drag it to your desired frame (Desktop — 2) like so: Screenshot by Author Feel free to play around with the “Interaction Details” if so desired. The cool thing about Figma is that other users can view your prototype and edit it too (for the free tier, only two people can edit the prototype). Simply click on the “Share” button on the right side of the page and this will show up: Screenshot by Author Lastly, you can also rename the frames to more appropriate names like so: Before editing the frame’s names After editing the frame’s names That’s about it for the basic part of the tutorial! It is fairly brief and there are various parts that aren’t covered. However, it is rather tricky to explain all of its features in a written format, so the best way to go about it might be to click around and play with what they have to offer! Part 2 (The slightly more advanced stuff) This section mostly comprises points that I formerly did not know about and only learned after watching some YouTube video tutorials. 1. You can add a gradient (or more colour overlays) to a frame or component by adding a new fill. Screenshot by Author 2. You’ll probably want to add an image (png/svg/etc.) you’ve downloaded into your prototype. Once you have your image in a folder, just click on the image and directly drag it into your desired frame. Screenshot by Author Screenshot by Author — Banner Image Designed by kjpargeter / Freepik With a little adjustment, you’ll be able to get your image properly displayed in your frame. 3. Using breakpoints: for demo purposes, I will now overlay a simple rectangle over my frame HomePage.
Screenshot by Author If we double click the blue rectangle and then mouse over an edge, we can see the breakpoints appearing: Screenshot by Author Then, by clicking on the breakpoints and dragging them into position, you can create interesting shapes and designs. Screenshot by Author 4. Create Component VS Group Selection The difference between the two: if you create a component and reuse it in many different frames, a change made in one frame will be automatically replicated in all of your other frames (if they also contain that component). Screenshot by Author For example, it’s convenient to have your navBar (containing text, logo, etc.) as a component, so that if you decide to make a change in one, you won’t have to copy and paste it into the other frames, since they’ll be automatically updated for you. Take note that even the prototype selections are automatically replicated too. Screenshot by Author If you choose to group a couple of elements instead, changes to one group will not be replicated across the other frames, even if the same group appears in your other frames.
https://medium.com/design-bootcamp/how-to-use-figma-the-essential-guide-90576c20a957
['Ran', 'Reine']
2020-11-03 06:01:25.802000+00:00
['Technology', 'Prototyping', 'UI', 'Software Development', 'Design']
Teaching Customers How To Use Your Product One At A Time: What Works For Tyme Iron Creator Jacynda…
Teaching Customers How To Use Your Product One At A Time: What Works For Tyme Iron Creator Jacynda Smith Tara McMullin Follow Nov 21, 2019 · 4 min read The Nitty-Gritty: How Tyme founder Jacynda Smith manages 100–200 individual consultations with new customers each week Why these personalized consultations help Tyme delight 90% of frustrated customers How virtual styling sessions create a feedback loop that helps Tyme get better & better What Tyme is doing to leverage the success they’ve had with personalized virtual styling sessions “Do things that don’t scale.” That’s the advice that Paul Graham, co-founder at startup accelerator Y Combinator, commonly gives to founders. “Do things that don’t scale” just happens to sound like the opposite of what many digital small business owners fret about when they exclaim, “but that doesn’t scale!” Here’s the thing: if we spend all our time worrying about what does and doesn’t scale, we don’t take the very necessary steps to get to the place where scaling is even an option. Today, we’re examining customer service that might not scale but has helped the company create massive growth. Before we get there, let’s take a closer look at this idea of doing things that don’t scale. In Graham’s article on the concept, he outlines how a number of today’s huge companies did things that didn’t scale to build their footprint. First, companies like Stripe, Airbnb, and even Facebook recruited new customers by hand. The Stripe founders personally set up new users and installed the software on their websites. The Airbnb founders literally went door to door. Facebook famously went from campus to campus signing up new users. Second, founders make deliberate choices to take small actions that build the foundation for their ability to scale up. 
Graham writes, “the right things often seem both laborious and inconsequential at the time.” The “right things” were actions like the Airbnb founders taking professional photographs of early home listings or Steve Jobs prioritizing the quality of execution of his product from fonts to packaging. Finally, Graham talks about how many successful companies have been built by “over-engaging” with a small group of core users in the beginning. The founders reach out, have one-on-one conversations, and find out how the product is meeting (or not meeting) the user’s needs. It creates a feedback loop that helps the product get better and the company better understand the customer. And that leads us to today’s conversation with Jacynda Smith, the creator of the Tyme Iron. The Tyme Iron is a unique hairstyling tool that’s meant to replace both your flat iron and your curling iron so you can create a variety of styles for medium-length to long hair. When you look at it, you get it. But when you use it? Well, that can be a different story. Faced with questions and even some frustration from new users, Jacynda made an interesting choice. She decided to FaceTime her customers, one at a time, and walk them through the process of creating the style they wanted to create with their new Tyme Iron. In other words, Jacynda made the choice to do something that doesn’t scale. But instead of abandoning that choice as the company grew, she doubled down. As you’ll hear, the company now employs 5 full-time virtual stylists whose job it is to sit down with new customers, one on one, and help them style their hair with their new Tyme Iron. I had to know how this process is managed, plus I wanted to know how investing in this premium customer experience has benefitted the company overall. And that’s what this interview is all about. One last thing though before I introduce Jacynda. Doing things that don’t scale works for any size business. That means it can absolutely work for yours. 
Now when I bring up doing things that don’t scale, most often I hear “but I don’t have time for that” or “I can’t afford to do that!” And that’s understandable. Many of the big companies I mentioned earlier are willing to take a hit to do things that don’t scale at the beginning. The truth is that taking that hit might work for you, too. But more likely, if you don’t have the time or money to do things that don’t scale, there’s something out of whack with your business model and pricing. So before you give yourself an out and tell yourself that doing things that don’t scale is good for someone like Jacynda but not good for you, take a good long look at the way your business is actually functioning. Now, let’s find out what works for Jacynda Smith!
https://medium.com/help-yourself/teaching-customers-how-to-use-your-product-one-at-a-time-what-works-for-tyme-iron-creator-jacynda-cd27c587357b
['Tara Mcmullin']
2019-11-21 13:21:14.521000+00:00
['Podcast', 'Business', 'Small Business', 'Entrepreneurship', 'Customer Experience']
Why Mastercard Bought a Point-of-Sale Lending Platform
Why Mastercard Bought a Point-of-Sale Lending Platform Mastercard’s acquisition of Vyze opens the door for adoption of POS financing in a way acquirers might not be able to achieve. As banks, merchants, and even tech companies offer consumers new and different ways to pay for stuff, credit card companies are trying to protect their role as the pre-eminent platform for everyday purchases by offering more buy-now-pay-later options. That’s why Mastercard paid an undisclosed amount this week for the point-of-sale loan platform Vyze, which lets consumers take small, short-term loans for everyday purchases without racking up credit card debt. Increasing payment options for consumers buying non-essential items is one way Mastercard hopes it can help merchants finance more sales with less risk, and add a new revenue stream as some consumers forgo credit-card purchases. “By providing additional lenders to merchants we will see increased approval rates, which is great for the merchant, help reduce abandonment at checkout, and for the consumer, it just provides another choice at the point of sale,” Blake Rosenthal, the executive vice president of global acceptance at Mastercard, told Cheddar. Mastercard, which traditionally processes transactions between banks and retailers, makes money by charging consumers interest and retailers a fee for in-store and online purchases. By getting into the point-of-sale lending business with Vyze, Mastercard can be the direct link between merchants and consumers, taking a percentage of the loan for individual transactions. It essentially allows Mastercard to facilitate loans directly between consumer and retailer. “We offer you access, and now your relationship is with that lender,” Rosenthal said.
“So what we’ve done is facilitated that matchmaking process, so the lender will email you terms and conditions and information about repayment.” Vyze touts at least a dozen lenders on its platform, including TD Bank and the online lender Avant, allowing customers access to multiple credit options for a purchase with one streamlined application. “We have the opportunity to put our lenders in whatever position we want,” Rosenthal said. “The ideal situation is a lender in the primary position, in the secondary position, and so forth. We also have the opportunity of doing a round robin. There are a lot of different ways that we can present different lenders in the platform.” Point-of-sale loans haven’t exactly taken off with consumers in the U.S., where debit and credit card purchases still account for nearly half of all transactions. Paying with plastic is easy, and there are plenty of incentives from credit card companies. (In a separate announcement this week, Mastercard introduced new “always-on” benefits to its World and World Elite cardholders that include extra perks when they spend with Lyft, Fandango, Boxed, and Postmates.) Alternative payment options like Apple Pay, which started in 2014, and Google Wallet, in 2011, have been slow to take off. Only 24 percent of iPhone users in the U.S. use Apple’s electronic payment system, according to research by Loup Ventures. But the drumbeat for POS loans has been growing louder for the last two years as merchants seek ways to increase sales, and consumers demand more transparency from credit providers. According to Affirm, the leading POS lender, merchants see a 20 percent conversion lift and an 87 percent increase in average order value when customers use its payment option. A study by Forrester reported a 32 percent increase in sales for companies offering POS financing.
Earlier this year, Affirm solidified its partnership with Walmart to provide POS loans to its shoppers, and soon after the announcement raised $316 million, bringing its total funding to more than $800 million. Chase, the largest U.S. bank by assets, has also revealed its own POS lending product, and Square said it would soon pursue its own POS product. Klarna, a Swedish company that received a $20 million investment last year from H&M, plans to expand its POS offerings in the U.S. and has been pushing a flashy ad campaign featuring Snoop Dogg to tease them. POS products are a way for the young fintech industry to lure millennial customers that the companies believe are distrustful of the traditional banking system, tired of hidden fees, and averse to taking on more debt. Though the POS lending startups offer more transparent loans, they’re still offering debt and risk — it may take time for consumers to grow comfortable with this new payment-financing option. By getting into the POS financing game now with Vyze, Mastercard could bolster confidence, drive more customer adoption, and increase profits more than an acquiring bank could achieve individually.
https://medium.com/cheddar/why-mastercard-bought-a-point-of-sale-lending-platform-a23f689d93c7
['Tanaya Macheel']
2019-04-17 18:55:03.345000+00:00
['Fintech', 'Startup', 'Business', 'Technology']
Simple and Multiple Linear Regression in Python
To begin modelling, the first step is to import the required libraries and, most importantly, the dataset. The drinks dataset, which can be found here, has been used for this tutorial. The goal is to analyze and predict alcohol consumption using the features present in the dataset. Now that the dataset has been loaded, it is time to take a look at it to understand the variables, data types and the overall structure. It is important to know the variables present in the dataset, hence, the head() method is applied to check the columns. One can see that the dataset is a mixture of text and numbers, therefore, the dtypes attribute is applied to make sure that the data types are defined appropriately. In the output, it can be seen that all the numerical columns are either of data type integer or float and the textual values are of type object. Thus, there is no need for converting the data types. Let’s dig into the relationship between the different types of alcohol per continent to understand the data further. Since the variable continent is categorical, the groupby() method is applied, appending sum() at the end to find the total distribution of alcohol per continent. It is seen that beer_servings are the highest and that Europe has the maximum beer_servings. To understand its distribution in detail, apply the describe() method, which will show a statistical summary. Before modelling Multiple Linear Regression, let’s understand and apply Simple Linear Regression (SLR). This is the most basic form of regression modelling and it helps one understand the relation between two variables, viz., the dependent/target variable (y) and the independent/predictor variable (x). Following is the equation for SLR: y = bo + b1x, wherein bo is the intercept and b1 is the slope. Python easily calculates these values. To apply SLR, one independent and one dependent variable have to be selected. 
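The exploration steps above can be sketched with pandas. Since the drinks file itself is not reproduced here, the small inline DataFrame below is a made-up stand-in: the column names follow the article, but the sample values are illustrative only.

```python
import pandas as pd

# Tiny stand-in for the drinks dataset (values are illustrative, not the real data)
drinks = pd.DataFrame({
    "country": ["France", "Germany", "Nigeria", "USA"],
    "beer_servings": [127, 346, 42, 249],
    "spirit_servings": [151, 117, 5, 158],
    "wine_servings": [370, 175, 2, 84],
    "total_litres_of_pure_alcohol": [11.8, 11.3, 9.1, 8.7],
    "continent": ["EU", "EU", "AF", "NA"],
})

print(drinks.head())        # inspect the columns
print(drinks.dtypes)        # numeric columns are int64/float64, text columns are object

# Total servings of each type per continent
per_continent = drinks.groupby("continent").sum(numeric_only=True)
print(per_continent)

# Statistical summary of one column
print(drinks["beer_servings"].describe())
```

With the real dataset, `pd.read_csv("drinks.csv")` would replace the inline DataFrame; everything after that line is unchanged.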
Since the goal is to predict alcohol consumption, total_litres_of_pure_alcohol is chosen as the target variable. There are three independent variables and for SLR only one independent variable is needed. The solution to this problem is to check the correlation of the three independent variables with the dependent variable. This can be done by using the pairplot() method from the seaborn library. Since beer_servings and total_litres_of_pure_alcohol show the best correlation, beer_servings is selected as the independent variable. Modelling Simple Linear Regression: Firstly, import the linear model from scikit-learn. Secondly, create a Linear Regression object using the constructor. Thirdly, define the predictor and target variable. Fourthly, use the fit() method to fit the model. To find bo use the intercept_ attribute and to find b1 use the coef_ attribute. Therefore the Simple Linear Regression equation is as follows — total_litres_of_pure_alcohol = 1.40 + 0.031*beer_servings Finally, use the predict() method to predict total_litres_of_pure_alcohol. In this case, only the first five predicted values have been displayed. Now that alcohol consumption can be predicted, the company can determine its production level and the quantity that needs to be distributed to avoid wastage and achieve maximum profits. However, the model needs to be evaluated to check its goodness of fit. This can be done either numerically or visually. Let’s evaluate the model numerically using R squared, which is nothing but a measure of how closely the data points fit the regression line. An R squared value lies between 0 and 1. A model is considered good enough if the R squared value is near 1. Simply using the score() method gives the R squared value. The R squared value is close to one, hence, it can be said that the model is a good fit. Moreover, R squared also tells the percentage of variation in the dependent variable that is accounted for by its regression on the independent variable. 
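The simple-regression steps can be sketched as runnable code. Synthetic data is generated here so the example is self-contained, which means the fitted intercept and slope will only approximate, not match, the article's 1.40 and 0.031.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: total litres grows roughly linearly with beer servings
rng = np.random.default_rng(0)
beer = rng.uniform(0, 400, size=100).reshape(-1, 1)                # predictor x
total = 1.4 + 0.03 * beer.ravel() + rng.normal(0, 0.5, size=100)   # target y

slr = LinearRegression()       # create the Linear Regression object
slr.fit(beer, total)           # fit the model

print("b0 (intercept):", slr.intercept_)      # estimate near 1.4
print("b1 (slope):", slr.coef_[0])            # estimate near 0.03
print("first five predictions:", slr.predict(beer[:5]))
print("R squared:", slr.score(beer, total))   # close to 1 for a good fit
```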
Hence, in this case, it can be said that approximately 70% of the total variation in total_litres_of_pure_alcohol is explained by beer_servings. Another way to evaluate is by checking the residual plot. To view a residual plot, simply apply the residplot() method of seaborn and pass in the values. The residual plot displays that the data points are randomly spread around the x-axis. Thus the model is appropriate. However, if there was curvature in the plot, that would mean the model is not a good one. Modelling Multiple Linear Regression: Multiple Linear Regression (MLR) is a prediction technique. It provides an explanation to understand the relationship between one continuous dependent/target variable (y) and two or more independent/predictor variables (x). Following is the MLR equation — y = bo + b1x1 + b2x2 + .. + bnxn, wherein bo is the intercept, b1 is the coefficient of x1, similarly b2 is the coefficient of x2 and so on. For the model to perform better on unseen or new data, the data has been split into training and testing sets. To do so, import train_test_split from sklearn. Also define the variables; in this case, there are multiple independent/predictor variables. z: independent variables y: dependent variable z_train, y_train: parts of the available data as the training set z_test, y_test: parts of the available data as the testing set test_size: percentage of data for testing (30% in this case) Similar to SLR, create the Linear Regression object using the constructor and fit the model using the fit method. However, the model will be fit on the training set. Additionally, calculate the intercept and coefficients using the intercept_ and coef_ attributes. Therefore the Multiple Linear Regression equation is as follows — total_litres_of_pure_alcohol = 0.48 + 0.017*wine_servings + 0.018*beer_servings + 0.017 * spirit_servings Now that the model is set up, let’s predict total_litres_of_pure_alcohol using the predict() method on the test subset. 
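The train/test split and the multiple-regression fit can be sketched the same way. Again the data is synthetic so the example runs on its own, and the fitted coefficients will only approximate the article's 0.48 / 0.017 / 0.018 / 0.017.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 200
# Three predictors standing in for wine, beer and spirit servings
z = rng.uniform(0, 400, size=(n, 3))
y = 0.5 + z @ np.array([0.017, 0.018, 0.017]) + rng.normal(0, 0.5, size=n)

# 70/30 split, as in the article (test_size=0.3)
z_train, z_test, y_train, y_test = train_test_split(
    z, y, test_size=0.3, random_state=42)

mlr = LinearRegression().fit(z_train, y_train)   # fit on the training set only
print("intercept:", mlr.intercept_)
print("coefficients:", mlr.coef_)
print("predictions on test set:", mlr.predict(z_test)[:5])
print("R squared on test set:", mlr.score(z_test, y_test))
```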
Now let’s evaluate the MLR model on the test subset with the help of R squared, which can be calculated using the score() method. The R squared value is close to one, hence, it can be said that the model is a good fit. In this case, 75% of the variation in total_litres_of_pure_alcohol is explained by the independent variables. To visually evaluate the model, let’s create a distribution plot. A distribution plot compares the distribution of the predicted values with that of the actual values. Use the distplot() method of the seaborn library to create a distribution plot. Pass the actual values (total_litres_of_pure_alcohol) as the parameter and set hist=False since a histogram is not desired. Moreover, the colour and the label can be set as well. The predicted value is passed into the second plot, which is ‘a’ in this case, and ax=ax1 displays the plot on ax1. It is seen that the predicted values fit closely to the actual values, thus, MLR is a good model. In conclusion, the prediction for alcohol consumption has been performed using SLR and MLR. Upon evaluation, it is learnt that MLR is the better model as it has a higher R squared value. With the predicted values the company can focus on certain continents and countries to increase its profitability.
https://medium.com/swlh/simple-and-multiple-linear-regression-in-python-70dd09f9acbf
['Minesh Barot']
2020-04-24 03:56:51.500000+00:00
['Data Science', 'Linear Regression', 'Data Visualization', 'Python', 'Predictions']
Gray Wanderers
In stories, stars will light the path to victory, grief’s aftermath if only you would crane your head and let your stubborn self be led where Heaven wills — follow that trail and you are destined ne’er to fail nor fall: your spirit guarded still should all of Hell besiege your will. So says the myth — but where is fact? for those who linger, still intact but bleeding out, adrift in gray, no stellar beacons point their way. Are they to drift forever on until all hope of light is gone?
https://zachjpayne.medium.com/gray-wanderers-ec51f19efcf8
['Zach J. Payne']
2019-01-06 22:03:37.378000+00:00
['Grief', 'Winter', 'Depression', 'Creativity', 'Poetry']
Archetypal nature of exceptional design
Archetype In the above picture, you’re seeing a shape reminiscent of a rose. Its form is so distinctive that your brain, receiving this stimulus, immediately creates a percept of a rose. You’re subconsciously experiencing an archetype. ἀρχέτυπον — Greek “archétypon” — the original pattern from which copies are made. Philosophers also translate this notion as ‘essence’. That ‘essence’ is what we are far more able to discern than to articulate. A part of a product that decides not only its commercial success but, first of all, its relevance. Panton Chair, iPod, Glass Coca-Cola Bottle, 302 Telephone, Porsche 911, Braun Sextant Razor. They resonate with sincere essentialism. This phenomenon was once very honestly described by the American cognitive linguist George Lakoff: “make the thing what it is, and without which it would be not that kind of thing”. This reduction of what’s pointless and simultaneous emphasis on the object’s core extracts the substance of which its essence is made. What is the substance of archetype? Intention. An investigation into a purpose. Why is it like this and not like that? What problem are we trying to solve? Who are we going to embrace? A vision driving the creative process and firmly engraving particular feelings that are going to be subconsciously discerned by people. The way we frame the problem is a fundamental factor in every following stage of the creative process and its final consequence. Function. A method for an intention. 
A framework to make even the most complicated activity obvious, natural, unobtrusive and deferential, or even enjoyable and delightful. The most favorable is the one utterly blended into its context, requiring no explanation. Beauty. A sincere balance between function and aesthetics, an embodiment of an intention. Derived from finding excellence in the quality and selection of assets. An enchantment. Design is a process of crystallizing intention into a form and function. If successful, it results in the birth of an archetype — a highly memorable and thereby timeless object. A design can’t be good or bad. A designer accomplished the design process successfully or unsuccessfully. Something became better or irrelevant. Someone’s problem was solved or new problems were created. This nuance was once aptly articulated by the broadly admired Antoine de Saint-Exupéry: “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.” Successful design speaks for itself. It doesn’t need clarification. The masterpiece “The Passion of Joan of Arc”, a story about the trial, produced in the era of silent film, still dazzles after more than 90 years. Its narration, based on extraordinary frames and unprecedented acting, remarkably communicates the meaning of scenes. Likewise, design doesn’t use words to tell stories. Furthermore, it causes confusion or even frustration when it’s incapable of wordless communication. Design is supposed to be obvious yet compelling. Archetypes are what they are because they require no explanation. They silently manifest their idiosyncratic meanings. Archetypes tend to be design icons. 
Knowing that, we designers are often tempted to create for our individual pride, for vanity. It’s our responsibility to use the tools and skills that we were given to create relevant things, to take care of societies as well as individuals, to create something better for everyone. Another threat that tends to haunt us is an inadequate comprehension of the contrast between “different” and “better”. Instead of questioning “is it good enough?” we rest on our laurels without being sufficiently inquisitive. We let ourselves be poisoned by cynicism and seduced by stereotypes — the antagonists of archetypes. Stereotype At the beginning of this article, you saw a pink shape reminiscent of a rose. Even though a commonly known poem says “roses are red”, we know that many species of rose exist, and some of them are decidedly not red. That poem uses a stereotype. So do we designers. Too often we’re constrained by dogma, best practices, and an unceasing hunt for inspiration by mimicking others’ work instead of stepping into the inventor’s shoes. The year is 2007, and Nokia dominates, owning 49.4% of the phone market. Enthralled by its past success, Nokia becomes a victim of its own myopia, losing 90% of its market value over just six years. New competitors had emerged and challenged the status quo of the phone, its stereotype. While the core purpose is still in place, the phone’s form, aesthetics, and functionalities have changed. The plastic keyboard was replaced by a widescreen display. Binary interactions gave way to gestures. Software, once passive, became a flexible medium between humans and technology. The fundamental intention behind the phone remained the same: it provides connectivity, even though that no longer necessarily means a phone call. The new embodiment inherited its purpose but was unleashed by a new interpretation of the original archetype. We designers lose our potential by making things only different. We should always question ourselves and others by asking “is it good enough?”. 
Our goal is to strive for better, even when it means facing a tremendous challenge. Our responsibility is to push things forward, to simultaneously embrace individuals and societies, today and for generations to come. No matter what century you currently live in, this is the time to explore, time to be inquisitive, time to have a ferocious appetite for discovering and creating new ἀρχέτυποι.
https://medium.com/the-supersymmetry/archetypal-nature-of-exceptional-design-579f81db154a
['Radek Szczygiel']
2020-11-21 13:37:03.473000+00:00
['Design', 'Business', 'Product', 'Product Design', 'Management']
What I Learned Wearing the Wrong Size Pants for Two Weeks
Back in January, when we were still free to move about the country, I took a trip to Key West for a music festival. Traveling by air is a dicey proposition for me. Like a lot of people, the changes in altitude, cabin pressure, and the two drinks it takes for me to not be a crazy person on a plane have side effects. Namely, I blow up like a tick. Good times. I had a three-hour drive from the airport in Miami to Key West and once I got off the plane, it became apparent that there was no way on God’s green Earth that I was going to make that drive comfortably in the pants I was wearing. Not if I wanted to keep breathing. Mind you, I was still in that post-Christmas cookie phase where skinny jeans were most likely not a good idea to begin with. What I needed were some fat pants. I did what any woman in search of comfort would do: I strolled right into the nearest Miami Old Navy and bought a pair of cheap pants a size up from what I was wearing. The kind with a little stretch. I could now spend the rest of my vacation in comfort. Winning. I have spent most of the pandemic in sunny Arizona where I could get away with wearing flowy dresses for months on end, effectively staving off the need to push my legs through anything more cumbersome than yoga pants. But, it’s chilly now and that push has come to shove. Quarantine was all fun and games until we had to put on pants. That ended the party right quick. Heading out for a few days of relaxation in Colorado after the election, I drove for 8 hours in those Miami-bought “fat jeans” and I thought they were going to sever me in half. An hour in, I had them unbuttoned. If I left the house during my vacation, I fashioned a hair tie into a button extender like I was trying to hide five months of pregnancy. I wish I was kidding. 
Any woman who has ever felt getting into jeans would be easier with Crisco and a shoe horn knows the damage to one’s psyche this does, especially when those jeans were already bought a size larger than we normally wear. In a nutshell, I felt really, horribly fat. The kind of fat that creates moments of desperation where we Google how much liposuction costs and what the downtime is. When I got home, I vowed to correct the direction of pandemic pounds and get my growing ass in gear. Still, there was this looming issue of declining temperatures and a lack of comfortability. I was left with two choices: continue to shove myself in jeans that didn’t fit or succumb to buying just one pair that was a size larger. Again. It’s 2020. We’re living in a dumpster fire. The fact that I’m not in a room with padded walls sometimes feels like accomplishment enough. It’s totally normal to assume that maybe our fat jeans might turn into our skinny jeans. We change shape with none of the superpowers of being an actual Transformer. I cut myself some slack and succumbed to once again venturing to Old Navy in search of cheap jeans. Cheap is a selling point when you convince yourself your size is temporary. I searched my closet for those stupid jeans I had been wearing so I knew what the next size up would be. Then I laughed. Hard. I had been walking around in the jeans I discarded in January, not the new ones I bought in Miami. I had been trying to squeeze into a pair of jeans that should only be attempted after spending exactly 839 hours on an elliptical machine. Game. Changer. Digging through my closet, I found the Miami jeans and tried them on. They fit. Perfectly. I looked up the measurements of the jeans that did fit versus the ones that didn’t. I don’t know why I did this. I know my measurements and maybe I was trying to justify my skinny jeans not fitting. It worked. I had been trying to squeeze 153 pounds of 46-year-old woman into jeans with a 28-inch waistband. 
Who the hell has a 28-inch waistband? And who makes a size 8 with a 28-inch waistband? I ordered two more pairs of size 10 jeans while contemplating what I would say in my strongly worded letter to Those In Charge of Sizing at Old Navy. When the jeans arrived, I tried them on and stood in my bedroom in front of a full-length mirror. I felt like a completely different woman. Two weeks before, I felt fat. I pinched my muffin top. I tried to pull my pants higher to hide it. Not a good look, by the way. Always choose muffin top over camel toe. Always. I hated my body for two whole weeks knowing I should be more accepting of myself. I knew this year had been tough and gaining a couple of pounds should not tailspin me into body shaming self-loathing. But it did. Standing there in the right size jeans, I felt good. I felt sexy. I texted my boyfriend to tell him how cute my butt looked. Moreover, I was not going to pass out or throw up from restricted air and blood flow. I suddenly loved the look of my curves, the shape of quads I earned doing squat reps with over 200 pounds, and the way my legs still looked long. I had convinced myself that if I saw myself on the street, I would be jealous of me. That might have been too much. Our size is a damn number and, clearly, an arbitrary one at that. Vanity sizing has ruined our perception. Media has skewed our ideals. Inconsistency has created confusion. The need to shrink and shrink and shrink had made us routinely feel not good enough. The TL;DR? Forget the friggin’ label and wear clothes that fit. Shame should not come tucked quietly into the pocket of our jeans. No one should ever have to sacrifice comfort in order to feel smaller. That’s insanity. How sexy we are or feel should not depend on a random number on a label. It should come from owning our bodies and feeling confident in them. No one is privy to what the size of our clothes is. No one knows but us so we are in charge of our perception of that number. 
It shouldn’t matter if the dress we’re wearing is a 6, a 12, or a 22. There is nothing sexy about being uncomfortable. There is nothing liberating about having our skin pinched and having to suck anything in. That’s a hellscape. Where we are right now is our spot. I want to own that. I want to stop comparing myself to other women, advertisements, media expectations, and crazy-ass narratives that I don’t find useful. I want to be comfortable inside and out. What would really help is if stores would employ a woman who just eyeballs our bodies and picks out things without telling us the size. Just hands us clothes that fit us nicely and lets us be none the wiser. I’m adding that to my letter to Old Navy.
https://medium.com/fearless-she-wrote/what-i-learned-wearing-the-wrong-size-pants-for-two-weeks-c8e7d868a3d6
['Vanessa Torre']
2020-12-16 15:46:06.866000+00:00
['Society', 'Fashion', 'Feminism', 'Women', 'Body Image']
Managing Instance Attributes in Python
Oftentimes in the implementation of a Python class, it would be useful if we could easily add extra processing to getting and/or setting an instance attribute. For example, it would be useful to be able to perform type checking or some other form of validation when getting/setting instance attributes. In this post, we will discuss how to manage instance attributes in Python. Let’s get started! Suppose we have a class called ‘Car’ with an attribute called ‘car_brand’: class Car: def __init__(self, car_brand): self.car_brand = car_brand Let’s initialize a car class instance as a ‘Tesla’: car1 = Car('Tesla') print("Car brand:", car1.car_brand) While this works fine, if we initialize an instance with a bad input value for ‘car_brand’, there is no data validation or type checking. For example, if we initialize a car instance with a car brand value of the number 500: car2 = Car(500) print("Car brand:", car2.car_brand) we should have a way of validating the type of this instance attribute. We can customize access to attributes by defining the ‘car_brand’ attribute as a ‘property’: class Car: ... @property def car_brand(self): return self._car_brand Defining ‘car_brand’ as a ‘property’ allows us to attach setter and deleter functions to our ‘car_brand’ property. Let’s add a setter method to our ‘car_brand’ attribute that raises an error if the instance is initialized with a value that is not a string: #setter function @car_brand.setter def car_brand(self, value): if not isinstance(value, str): raise TypeError('Expected a string') self._car_brand = value Let’s define our instance again, with our integer input 500: car2 = Car(500) print("Car brand:", car2.car_brand) Another instance management operation to consider is the deletion of instance attributes. 
If we look at our initial instance: car1 = Car('Tesla') print("Car brand:", car1.car_brand) We can easily delete the attribute value: car1 = Car('Tesla') print("Car brand:", car1.car_brand) del car1.car_brand print("Car brand:", car1.car_brand) We can add a deleter function that raises an error upon attempted deletion: #deleter function @car_brand.deleter def car_brand(self): raise AttributeError("Can't delete attribute") Let’s try to set and delete the attribute value once again: car1 = Car('Tesla') print("Car brand:", car1.car_brand) del car1.car_brand I’ll stop here but feel free to play around with the code yourself. CONCLUSIONS To summarize, in this post we discussed how to manage instance attributes in Python classes. We showed that by defining a class attribute as a ‘property’ we can attach setter and deleter functions that help us manage access to attributes. I hope you found this post useful/interesting. The code from this post is available on GitHub. Thank you for reading!
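For reference, the snippets discussed in this post assemble into one runnable class like this:

```python
class Car:
    def __init__(self, car_brand):
        self.car_brand = car_brand  # routes through the setter below

    @property
    def car_brand(self):
        return self._car_brand

    @car_brand.setter
    def car_brand(self, value):
        # type checking on every assignment, including inside __init__
        if not isinstance(value, str):
            raise TypeError('Expected a string')
        self._car_brand = value

    @car_brand.deleter
    def car_brand(self):
        # block deletion of the attribute
        raise AttributeError("Can't delete attribute")


car1 = Car('Tesla')
print("Car brand:", car1.car_brand)   # Car brand: Tesla

try:
    Car(500)                          # not a string, so the setter raises
except TypeError as e:
    print(e)                          # Expected a string

try:
    del car1.car_brand                # deletion is blocked by the deleter
except AttributeError as e:
    print(e)                          # Can't delete attribute
```

Because `__init__` assigns through `self.car_brand`, the type check applies both at construction time and on any later reassignment.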
https://towardsdatascience.com/managing-instance-attributes-in-python-b272fe23fd50
['Sadrach Pierre']
2020-05-10 21:43:19.420000+00:00
['Programming', 'Technology', 'Data Science', 'Python', 'Software Development']
Programming Linear Algebra in Java: Vector Operations
Two helper methods need to be discussed before we can do anything too linear algebra-y. The first is isZero(), which returns true if all entries are 0 and false if there are any non-zero entries in the vector. An isZero method is important because some vector operations––like normalizing––will result in division by zero if we don’t check this first. Some operations are perfectly valid with the zero vector, so this check is only done under specific circumstances. Second, we have checkLengths(Vector u, Vector v). All of the important vector operations––for instance, addition or dot products––are defined only for vectors u and v that have the same number of elements. checkLengths compares the two vectors. If they have the same length, nothing happens. If they have different lengths, an IllegalArgumentException is thrown: public static void checkLengths(Vector u1, Vector u2) { if (u1.length() != u2.length()) { throw new IllegalArgumentException( "Vectors are different lengths"); } } Basic Operations Vectors are famous for having two primary operations: scalar multiplication (multiplying each entry by the same scalar) and vector addition (adding the corresponding entries of two vectors), so our Vector class would be remiss not to support these. For each, I’ve created a static method that can be called directly from the class, and then an instance method that can be called directly on the Vector itself. Note that all of these operations create a new Vector object, rather than modifying the existing one. public Vector add(Vector u) { return Vector.sum(this, u); } public static Vector sum(Vector u1, Vector u2) { Vector.checkLengths(u1, u2); // ** see comment double[] sums = new double[u1.length()]; for (int i = 0; i < sums.length; i++) { sums[i] = u1.get(i) + u2.get(i); } return new Vector(sums); } The static method sum does all of the work, and the instance method add simply passes this , which refers to the calling object, and u , a passed vector, to sum. 
Pythonistas will recognize this as the Java cousin of self . There are some important differences between the two, like how instance methods take self as an argument in Python but simply require us to drop static in Java, but if you understand one, it’s easy to wrap your brain around the other. I draw attention to this in order to show the contrast between calling a method on an object itself ( this.method() ) and calling the Vector class’ static method with Vector.sum() . Also notice that in Vector.sum , the very first thing we must do is check to see if the vectors are the same length by calling Vector.checkLengths(u1, u2) . Because we’re adding the first element of u1 onto the first element of u2 , then their second elements, then their third elements, and so on, it’s important that they actually have the same number of elements. There’s actually a bit of redundancy built into this code––because checkLengths is inherently a method of the Vector class, we could have simply written the line marked with // ** see comment as checkLengths(u1, u2); To make it clear that we’re not calling any method that specifically requires an instance’s data, I’ve written it explicitly as Vector.checkLengths(u1, u2); Because this method simply performs a check and throws an IllegalArgumentException if the condition is not met, we don’t have any return value, and we can think of it almost as an assert statement. 
We can take a similar approach with scalar multiplication: public Vector multiply(double scalar) { return Vector.product(this, scalar); } public static Vector product(Vector u, double scalar) { double[] products = new double[u.length()]; for (int i = 0; i < products.length; i++) { products[i] = scalar * u.get(i); } return new Vector(products); } As well as dot products: public double dot(Vector u) { return Vector.dotProduct(this, u); } public static double dotProduct(Vector u1, Vector u2) { Vector.checkLengths(u1, u2); double sum = 0; for (int i = 0; i < u1.length(); i++) { sum += (u1.get(i) * u2.get(i)); } return sum; } We can apply this same logic to cross products, but we need to make sure the cross product is actually defined first––that means verifying that both vectors are actually of length 3: public Vector cross(Vector u) { return Vector.crossProduct(this, u); } public static Vector crossProduct(Vector a, Vector b) { // check to make sure both vectors are the right length if (a.length() != 3) { throw new IllegalArgumentException("Invalid vector length (first vector)"); } if (b.length() != 3) { throw new IllegalArgumentException("Invalid vector length (second vector)"); } Vector.checkLengths(a, b); // just in case double[] entries = new double[] { a.v[1] * b.v[2] - a.v[2] * b.v[1], a.v[2] * b.v[0] - a.v[0] * b.v[2], a.v[0] * b.v[1] - a.v[1] * b.v[0]}; return new Vector(entries); } I’ve accessed the elements of the arrays with slightly different syntaxes in these operations, both to make the cross product more readable and to highlight a feature of Java that can be confusing the first time you come across it. Notice how in dotProduct , I called the instance method u1.get(i) in order to access elements of the array, while in crossProduct , we accessed the elements directly with a.v[0] . In Java, any code in the Vector class has access to the private members of any Vector object, but we can also call those members’ instance methods. 
Direction and Magnitude Often, we want to know the magnitude (length) of a vector, which is really a special case of the p-norm where p=2. Since L1 and L2 norms come up in machine learning contexts (for instance, Lasso and Ridge regression), let’s go ahead and make a generalized function for computing the p-norm, given a vector and a value of p: // static method public static double pnorm(Vector u, double p) { if (p < 1) { throw new IllegalArgumentException("p must be >= 1"); } double sum = 0; for (int i = 0; i < u.length(); i++) { sum += Math.pow(Math.abs(u.get(i)), p); } return Math.pow(sum, 1/p); } // instance method public double pnorm(double p) { return Vector.pnorm(this, p); } // magnitude public double magnitude() { return Vector.pnorm(this, 2); } Having both a static method and an instance method for pnorm gives us a little bit of flexibility in how we compute the norm of a vector. We can call the method either from the Vector class itself (e.g. Vector.pnorm(u, 2) ), or we can call it directly on an existing Vector object (e.g. u.pnorm(2) ). Once we’ve defined the pnorm method, we can wrap it up in the magnitude method, to give us the magnitude of a vector. And now that we can compute that, we can normalize a vector. 
Normalizing is accomplished by dividing the entries of the vector by the vector’s magnitude, which is why we need the isZero check before we proceed: public static Vector normalize(Vector v) { if (v.isZero()) { throw new IllegalArgumentException(); } else { return v.multiply(1.0/v.magnitude()); } } public Vector normalize() { return Vector.normalize(this); } Notice the difference in how we’ve handled a zero Vector and Vector objects of different lengths––any time we attempt an operation on Vector objects with different lengths, we’re entering undefined territory, and we need to throw an IllegalArgumentException to abort, but there are some operations (such as pnorm ) that are perfectly valid for all-zero entries, so we wouldn’t want to build the exception into isZero() . More Complex Operations: Enclosed Angles and Scalar Triple Products Several operations rely on dot products, cross products, and magnitude calculations, which we can now perform using the methods we’ve built. First, we want to be able to compute the angle enclosed by two Vector objects, which requires the dotProduct method: public static double angleRadians(Vector u1, Vector u2) { Vector.checkLengths(u1, u2); return Math.acos(Vector.dotProduct(u1, u2) / (u1.magnitude() * u2.magnitude())); } And next, we want to be able to compute the scalar triple product, which requires both the dotProduct and the crossProduct methods: public static double scalarTripleProduct(Vector a, Vector b, Vector c) { return Vector.dotProduct(a, Vector.crossProduct(b, c)); } If you’ve ever had to do this by hand (for instance, on a linear algebra exam), you can probably understand how satisfying it was to watch these pieces fall into place with so little effort.
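For readers who want to sanity-check the formulas the Java class implements (dot product, cross product, magnitude, enclosed angle, scalar triple product), NumPy computes the same quantities in a few lines; the sample vectors here are arbitrary:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([7.0, 8.0, 10.0])

dot = u @ v                            # dot product: 1*4 + 2*5 + 3*6
cross = np.cross(u, v)                 # same cofactor formula as crossProduct
magnitude = np.linalg.norm(u)          # 2-norm, i.e. the p-norm with p = 2
l1 = np.linalg.norm(u, ord=1)          # L1 norm, the p = 1 case
angle = np.arccos(dot / (np.linalg.norm(u) * np.linalg.norm(v)))  # angleRadians
triple = u @ np.cross(v, w)            # scalarTripleProduct: a . (b x c)

print(dot, cross, magnitude, l1, angle, triple)
```

Comparing these values against the output of the Java methods is a quick way to verify the hand-written cofactor expansion in crossProduct.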
https://medium.com/swlh/programming-linear-algebra-in-java-vector-operations-6ba08fdd5a1a
['Dan Hales']
2020-11-01 22:49:00.075000+00:00
['Mathematics', 'Linear Algebra', 'Programming', 'Data Science', 'Java']
These Rappers Choose With Their Wallets Not Their Values
These Rappers Choose With Their Wallets Not Their Values If They Ever Had Values In The First Place Photo by Vidar Nordli-Mathisen on Unsplash The last week or two has been chock-full of rappers showing their priorities to their fans. As Election Day approaches, quite a few rappers, including Ice Cube, 50 Cent (though he walked it back — too late), Lil Wayne, and Lil Pump, have chosen to endorse Trump. While their fans are horrified, this is not too surprising. A wealthy celebrity is like any other wealthy individual — they want to keep as much of their money as possible. To hell with the rest of the world. It's a Problem…Till You're The Problem Saying that all must be held accountable, that the elite shouldn't hoard money, and that, through tax breaks and loopholes, they pay far less in taxes than someone working a 40-hour week sounds wonderful. But when it hits too close to home, look how quickly they try to tell the masses on social media that sh*t is gold. Cash rules everything around me. Cream. Get the money. — Method Man from C.R.E.A.M by Wu-Tang Clan No, Method Man is not one of Trump's supporters (as far as I know), but these lyrics illustrate the mentality of far too many individuals who reside within a higher income bracket. The wealthy are constantly given concessions and perks, from free products to tax breaks, but still want more. Taxes Photo by Viacheslav Bublyk on Unsplash What's got them riled is that the Biden tax plan would make them actually pay more. A person making over $400,000 annually would be taxed at 39.6%. About 1% of the population in the U.S. makes over $500,000, and most of them get that amount by keeping wages low, laying off/firing workers when the business is not making its projected profits, etc. Someone making $400,000 is closer to the one percent than they are to being destitute. Hell, many people I know could survive on 100k a year easy. They make what they do by keeping us down; just look at Bezos and all the other elite.
Biden's tax plan would benefit people and the economy, but the elite don't want everyone to benefit. Who would they look down on? They need someone to suffer so they can feel good. Honestly, rappers like Ice Cube, 50 Cent, Lil Wayne, Lil Pump and Kanye are no different. They need to feel like gods. Trump's tax plan would give more tax cuts to the wealthy and corporations, and that's the biggest reason they're endorsing him. They already pay less than most of the population, but they're greedy and want more. That's it. No different than a lot of the elite who prefer Trump. They're gonna ride his train, and it will run over everyone else — including poorer Trump supporters whose hatred of non-white people and longing for the "good ol' days" make them uninterested in anything else, except seeing Black people and people of color suffer. When we talk about who pays less and more in taxes, use the rule you damn sure should be applying to celebrity and elite donations: percentages. Who gives a damn if a celeb donates a million dollars when their annual income is 50 million? Unlearn feeling grateful for scraps thrown on the floor. The wealthy don't pay as much because the majority of their income doesn't come from wages. Misogynoir Photo by Clay Banks on Unsplash There is a level of misogyny and misogynoir that is prevalent in the U.S. and heavily present within the rap industry. We don't have to look far to see the responses from people regarding Tory Lanez shooting Meg Thee Stallion. There was little to no sympathy for her; everyone said she was lying or wasn't hurt that bad after being shot twice in the foot. This is an industry built on glamorizing keeping them hos in check. Society has taught them to ridicule, abuse, kill and demoralize Black women. Why else would they gravitate to a president that has double-digit accusations of rape, sexual assault and pedophilia combined? Makes you wonder about these rappers and question what may come out about them in the future.
This president has incited his base to violence against Black people, immigrants and people of color, yet they endorse him. Because they see a man before they see white. They see a kindred, abusive spirit who will also let them keep more of their money. These are not rappers struggling to survive. They don't care about Black people or anyone else, not if they have to actively demonstrate it. They just have words. An abuser endorsing an abuser is not quite as shocking when you look at it that way. Rapper 6ix9ine is a pedophile and abuser, and fresh out of prison he said that if he could vote, he would vote for Trump. Trump attracts the dregs of humankind. Trump is constantly seen ridiculing and targeting Black women, from news reporters to representatives, and they are either happy he does or, again, don't care because it's just a Black woman. They're too preoccupied with benefitting from their position as wealthy Black men and narrowing their proximity to the white patriarchal system to care that, in the end, they are still Black. Proximity does not matter when you already have a strike against you because of your skin color. Cast Them Aside Photo by Nechirwan Kavian on Unsplash Granted, one would think they would understand that their wealth is not a shield for the race. Like any other Black man, all it would take is one trigger-happy police officer indoctrinated in the slave-patrol mentality for their life to be snuffed out. But, alas, the rich are different. Don't support people who are actively working against us. Who are more interested in getting close to a predatory, racist, xenophobic, anti-LGBTQ, anti-disabled president than they are in helping everyone who isn't white. 2020 has done one blessed thing in this whole horrific year: we have seen how our favorite celebrities feel. Don't excuse it, don't ignore it. Acknowledge it and write them off. After all, they did that to us a long time ago.
https://medium.com/age-of-awareness/these-rappers-choose-with-their-wallets-not-their-values-76a16d15e1d7
[]
2020-11-01 23:14:25.371000+00:00
['Music', 'Culture', 'Equality', 'Race', 'White Supremacy']
On Names
On Names Names are important; they are more than just words, they shape and color the things they name. — From the Journal of Xavier Desmond by George R.R. Martin SWAT is our on-call and release team. It stands for Site Wellness, Availability, and Triage. While the three tenets are important, and the name is catchy and memorable, it is time to rebrand. Even though the militarization of the police isn't new, in the last 6 months we've seen a marked increase in disturbing events and actions by the police. And while Swatting also isn't new, it is being used by hate groups to try to silence victims of abuse and harassment within our community (including members of our own team). As Nathan so eloquently pointed out on Slack: The problem with drawing a parallel to a militarized police force is that we're the people who are unaffected by it (like white and affluent). The tech industry already has a rep for being out of touch. Using notes, add proposals for a new name. I'll add them to the post at the bottom, and we can vote on them via highlights.

Requirements:
Short
Memorable
Easy to type
Easy to say
Unambiguous at Medium
Does not have to be an acronym
Sounds OK saying: "I'm on my _____ rotation next week", "Have ____ investigate."

Some concepts for inspiration:
https://blog.medium.com/on-names-9eee9a60ee8
['Dan Pupius']
2016-08-15 23:32:49.411000+00:00
['Medium', 'Inside Medium', 'Engineering', 'Company Culture', 'Company']
Scaling Logging: JSON’s Not Enough
JSON is the de-facto logging standard. JSON is so ubiquitous that popular logging data tools (such as Elasticsearch) accept JSON by default. Although JSON is an evolution over previous logging standards, JSON's lack of strict types makes it insufficient to use for long-term persistence or as a foundation for a data lake. This post describes the problem with JSON and proposes a solution using a strictly typed interchange format such as Protocol Buffers.

The Trouble With JSON

JSON logs establish an explicit structure, and JSON parsers are available in most languages, which makes JSON accessible as a log standard. JSON logs are referred to as "structured" logging. When applications log as JSON, it means that they emit fully formed JSON to their log sinks. Common sinks are standard out/error, files, syslog, etc. These sinks are commonly connected to a daemon such as fluentd, which routes the logs to long-term storage such as S3 or Elasticsearch. The application emits JSON through its logging library. An example could look like:

{
  "message": "an important event",
  "client_id": 1,
  "timestamp": "2019-02-04T12:23:34Z",
  "pid": 12121,
  "http": {
    "request": {
      "path": "/some-path"
    }
  }
}

Structured logging is a solution to the previous era of logs, which are referred to as "unstructured". Unstructured logs are commonly logged as a single line and require custom parsers (regex) for each different log type. Web server access logs are one example of unstructured logs. The following shows an Apache log line:

127.0.0.1 - peter [9/Feb/2017:10:34:12 -0700] "GET /sample-image.png HTTP/2" 200 1479

Parsing this log may require a different solution than parsing an nginx vs. varnish vs. haproxy log. Clients of unstructured logs need N separate parsing methods, one for each log type. A client of JSON needs a single parser for all logs.
A JSON consumer using Python can parse any structured log using the same implementation:

import json

log = json.loads(log_line)

Unfortunately, parsing is only one of many steps in making logs usable. Logs need to be transformed, aggregated, validated, and stored/loaded. JSON has multiple properties which make it a poor candidate as a logging protocol.

Unenforceable Structure

Even though JSON is more structured than a text line, JSON does not encode the full structure. This makes logs "schema-on-read". Applications can send any JSON log, but it's the reader's responsibility to parse and use that log. It's not until reading the log that the schema (the full structure of the log) is applied. Consider an application that logs:

{ "http": { "request": { "path": "/test" } } }

A consumer that counts paths:

import collections
import json

paths = collections.defaultdict(int)
log = json.loads(log_line)
paths[log['http']['request']['path']] += 1

With schema on read, the structure is only important to the reader of the data. There is no feedback for the writer. The application logging the JSON can log anything, and it's up to the consumer of that data to apply a "schema". The full structure only becomes important when the log is read. Consider what happens when another upstream application logs a different format:

{ "http": { "request": "http://www.test.com/test" } }

This log breaks the log consumer, and the burden of handling it falls on the consumer. What often happens is that the consumer is updated to accommodate the new log format:

import collections
import json
from urllib.parse import urlparse

paths = collections.defaultdict(int)
log = json.loads(log_line)
try:
    paths[log['http']['request']['path']] += 1
except TypeError:  # 'request' is a plain string here, not a dict
    u = urlparse(log['http']['request'])
    paths[u.path] += 1

Schema on read becomes unmanageable as the number of producers increases. Imagine if there were 5 different structures? 10? 50?
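To make the failure mode concrete, here is a self-contained sketch (the log lines are invented for illustration) of one consumer forced to handle both producer formats at once:

```python
import collections
import json
from urllib.parse import urlparse

log_lines = [
    '{"http": {"request": {"path": "/test"}}}',           # producer 1: nested object
    '{"http": {"request": "http://www.test.com/test"}}',  # producer 2: plain string
]

paths = collections.defaultdict(int)
for log_line in log_lines:
    log = json.loads(log_line)
    try:
        paths[log['http']['request']['path']] += 1
    except TypeError:
        # indexing a string with 'path' raises TypeError, so fall back to URL parsing
        paths[urlparse(log['http']['request']).path] += 1

print(dict(paths))  # → {'/test': 2}
```

Each new producer format means another fallback branch in every consumer, which is exactly the scaling problem the article describes.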
In large organizations and/or microservice architectures, one team may be responsible for log ingestion, but many teams are responsible for log production. Even with shared tooling such as logging libraries, it's still possible to produce a variety of logs due to the lack of feedback (missing schema-on-write) on the producer side.

Unknown Types

Not only is the structure not known until read, but the data types are not known either. Consider 3 different log producers:

Producer 1: { "age": 1 }
Producer 2: { "age": "1" }
Producer 3: { "age": null }

Even though the structure of each log is the same, the data types change. Because of schema on read, the consumer needs to account for different structures and different data types.

Feedback Loops

Schema on read creates very long feedback loops for log producers. It's not until the data is read that a downstream consumer validates the schema and the usability of the log. Consider a common logging architecture. In this setup:

The application logs to syslog or a file
fluentd reads from that file or syslog
fluentd sends the logs to a log stream
The logs are processed from the log stream and are put in Elasticsearch

Due to schema on read, a log isn't validated until the consumer tries to use it. This means that the engineer developing the application sends what they think is a valid log but doesn't get feedback on its validity until later in the log stream. How does an engineer know if their log is failing? Where do they go to figure it out? How do they observe fluentd? The stream? The stream consumer? Elasticsearch? JSON logs using schema on read are difficult to debug.

Solving Logging: Schema on Write

An explicit schema combined with schema on write must be used in order to make log consumption sane. The technologies commonly used for explicit data schemas are:

Protocol Buffers
Avro
Thrift

The rest of this article will focus on Protocol Buffers.
Protocol Buffers require defining a schema and generating language-specific bindings for that schema. An example log schema may look like:

syntax = "proto3";

package log;

message Log {
  string message = 1;
  string level = 2;
}

(Note that proto3 does not support the required field label; fields are declared without one.) A language-specific binding needs to be created for each log producer. Afterwards, the log producer can build a protobuf Log object and then serialize that object. The producer receives feedback on the log's structure and types during serialization. Explicit schemas move logs from schema on read to schema on write!

The consumer needs a reference to the Log message using the Python-specific language binding, which is generated by the protoc compiler (the generated module is named log_pb2, even for proto3). Afterwards, Python can parse the log line directly into a Log message:

from google.protobuf.json_format import Parse

import log_pb2

log = log_pb2.Log()
Parse(log_line, log)
# log.message
# log.level

Protocol Buffers provide a fully validated log structure and strict field types!

Wire Protocols

Protocol Buffers ship with their own binary serialization format. Most Protocol Buffer client libraries (Python, Node, Go, etc.) support serializing and deserializing between JSON and Protocol Buffers. This means they can parse a JSON message into a Protocol Buffer and output JSON from a Protocol Buffer. This reduces the friction involved in introducing Protocol Buffers into a JSON logging pipeline.

Conclusion

JSON and schema on read are not sufficient for building multi-service log processing systems. JSON's lack of explicit structure leads to complexity for log consumers. An explicit schema such as Protocol Buffers, which supports schema on write, can address JSON's issues. Protocol Buffers enable fast feedback for clients on the validity of logs and provide consumers with the full object structure and data types of the log before consuming.
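If pulling in protoc feels heavyweight for a quick experiment, the schema-on-write feedback loop can be approximated in plain Python. This dataclass sketch (my own illustration, not a protobuf replacement) rejects malformed logs at construction time, on the producer side, before they ever reach a consumer:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Log:
    message: str
    level: str

    def __post_init__(self):
        # Fail fast on the producer side: wrong types never get serialized
        if not isinstance(self.message, str) or not isinstance(self.level, str):
            raise TypeError("message and level must be strings")

Log(message="an important event", level="info")   # OK
try:
    Log(message="bad log", level=None)            # rejected at write time
except TypeError as e:
    print(e)  # → message and level must be strings
```

The point is where the error surfaces: the producer gets the exception immediately, instead of a consumer discovering the bad type days later in Elasticsearch.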
These properties are especially important for data lakes, where raw logs may be materialized into multiple different systems for consumption. Without an explicit schema, JSON quickly becomes untenable for these use cases. Links
https://medium.com/dm03514-tech-blog/scaling-logging-jsons-not-enough-f6920cc2788e
[]
2020-06-08 01:08:27.694000+00:00
['Software', 'DevOps', 'Logging', 'Software Development', 'Software Engineering']
Astonishing Ancient Viking Temple Discovered in Norway
Astonishing Ancient Viking Temple Discovered in Norway 1200-year-old church deeply devoted to Odin and Thor A typical Viking landing — Image by Pixabay Many of us became familiar with the Viking gods Odin and Thor while watching the excellent TV series 'Vikings' on the History channel. From the way Norse culture was depicted, we quickly learned just how revered those two gods were. That's what makes actual discoveries like this very special. It validates the way many of us view Viking society and brings it to life. Ancient pagan temple discovered A 1200-year-old temple to the Old Norse gods Thor and Odin was recently unearthed in Norway¹. This is an amazingly rare relic of a religion practiced hundreds of years before Christianity became the primary religion in the region. The temple is a large wooden building that measures around 45 feet long, 26 feet wide, and approximately 40 feet high. Archaeologists estimate that the structure was built sometime during the late eighth century. It was used for worship, and sacrifices were offered to the gods during the midwinter and midsummer solstices. Remains of a Viking settlement — Image by Public Domain First Old Norse temple ever found in Norway The Old Norse culture built a reputation that was both feared and famous. In those days, everyone was horrified by the stories of Viking warriors and sailors coming ashore to raid, rape, and burn their communities². As time passed, these Norse raiders began trading and colonizing across Europe and into distant lands like Greenland, Iceland, and Canada. Surprisingly, this discovery is the first Viking temple found in Norway, according to archaeologist Søren Diinhoff of the University Museum in Bergen. "This is the first time we've found one of these very special, very beautiful buildings," Diinhoff said. "We know them from Sweden, and we know them from Denmark. … This shows that they also existed in Norway." Vikings started constructing large 'god houses' during the sixth century.
These god houses were far more complicated than previous sites. They were used for worshiping the Old Norse gods. "It is a stronger expression of belief than all the small cult places," Diinhoff further said. "This is probably something to do with a certain class of the society, who built these as a real ideological show." The story behind the discovery The god house became a link between the Old Norse gods and the local people. They believed that their gods lived in the heavenly realm of Asgard. A 'rainbow bridge' called Bifröst connected the earthly realm, Midgard, to Asgard. This ancient god house was unearthed at Ose, a seaside town close to Ørsta in western Norway, in an area that had been zoned for new housing development. Post-holes outlining the structure's unique shape and primary central tower were unearthed at the site. It is believed that the Ose god house hosted sacrificial fires, as it held wooden statues of the war god Odin, the storm god Thor, and Freyr, the fertility god. The site sits beside the coast among mountains and inlets, about 150 miles south-west of the modern city of Trondheim. Boathouses would have been built along the shore in ancient times. A Viking earth grave — Image by Pixabay God house The discovery site is located on the coast alongside various inlets and mountains. Boathouses were likely built along the shoreline. It would be hard to find a more fitting location for a Viking settlement. This excavation also uncovered evidence of earlier settlements dating as far back as 2500 years ago. However, the god house remains at Ose came from a period when wealthy families dominated the region. This resulted from Scandinavian society mingling with elite members of both the Germanic tribes of northern Europe and the Roman Empire. "When the new socially differentiated society set in, in the Roman Iron Age, the leading families took control of the cult," Diinhoff added.
He also emphasized how Norse religion became more organized and ideological, and that the god houses at Ose used Christian basilicas as a model for their structures. Suppression of worship During the 11th century, Norway's kings suppressed the Old Norse religion while forcing the Christian religion upon their subjects³. They ordered all Viking places of worship to be torn down and burned. At this point, there's no indication that this movement targeted the Ose god house. Yet many feel that future analysis will reveal that it was among the pagan structures destroyed by these Christian kings.
https://charliestephen6.medium.com/astonishing-ancient-viking-temple-discovered-in-norway-27a63f0fc856
['Charles Stephen']
2020-11-04 16:04:47.044000+00:00
['Society', 'Ancient History', 'Religion', 'History', 'Vikings']
Playing Safe Ensures Mediocrity for Product Owners
Don't Run From Conflicts. Engage in Passionate Discussions. Being a Product Owner is exciting yet daunting. Every day you have tons of decisions awaiting you, but one is quite impactful: do you want your day to be exciting or boring? If you want an exciting day, don't run from conflicts. Otherwise, your day is going to be pretty dull. A typical day for a Product Owner has endless opportunities for conflicts; how you deal with them will ultimately define how successful you can be. Running away from conflicts will lead you to fake harmony and slower progress. But if you are not afraid of conflicts, you've got great chances of success. "It's as simple as this. When people don't unload their opinions and feel like they've been listened to, they won't really get on board." ― Patrick Lencioni, The Five Dysfunctions of a Team: A Leadership Fable Stakeholders' happiness is pointless. Alignment is what matters. Wherever you work as a Product Owner, stakeholders will always have more wishes than the Scrum Team has development capacity. If you aim to please everyone by doing a little bit of everything, frustration is inevitable. Although stakeholders may argue that everything is essential and urgent, we all know that's just their perception. And if you try to negotiate with each stakeholder one at a time, you will have a hard time coming to an agreement. A rather bold approach is to have a conversation with all stakeholders at once. But guess what will happen? Conflicts. Once you have all stakeholders in the same meeting, the only certainty you have is conflict. To reach commitment, as a Product Owner, you should foster passionate, even conflicting, discussions. If everyone has the chance to unload their perspectives, they can commit to the decisions. A successful way of conducting this meeting could be:

Set a goal to achieve: the Product Owner shares the goal to achieve. For example, "We have to double our recurring users by the end of the quarter. What could we do to make it happen?"

Be a facilitator: observe how the stakeholders interact. Search for unsolved conflicts and bring them to the table. For example, John has a request disconnected from the goal, but he has the highest paycheck in the room. If nobody challenges him, you should do it. You could ask: "John, I don't see how your request puts us closer to the goal. Could you help us understand it?"

Make clear agreements: as the stakeholders unload their opinions, you need to make clear agreements. Write down what will be done and what will not be done. Ask the group if they can commit to that; if not, ask why.

A joint prioritization meeting works if you ensure everyone shares their opinion. That's why you should have a proper timebox for it. Don't try to shorten the meeting. Everyone has to express their opinion; otherwise, forget about commitment. The result of the meeting is alignment between the stakeholders. Not everyone will leave the room with something prioritized for the upcoming sprints. Still, everyone should be happy because the group made the right decisions for the product. Reaching commitment is what matters most. Photo by Headway on Unsplash If your choice is to play safe and avoid conflicts with stakeholders, they will annoy you with new wishes every day. By saying "yes" to everything, you ensure the unhappiness of everyone. Developers will be fed up with interruptions.
End-users will be lost with a confusing product. Stakeholders will be unhappy if no real value is delivered. Tension between Developers and Product Owners leads to meaningful solutions. Product Owners are responsible for defining which problems are worth solving and why to work on them. The entire Scrum Team is responsible for the solution, and the Developers are accountable for the implementation. A thin line separates Product Owners and Developers; I call it the how line. If Product Owners step on it, Developers may not welcome them with arms wide open. From my experience, if Developers don't challenge the Product Owner, we risk building solutions that are too simple. And if Product Owners don't challenge Developers, we risk developing solutions that are too complex for the problem we want to solve. Conflict is a mandatory ingredient for success. A Scrum Team cannot be successful by playing safe. Suppose the Product Owner brings a straightforward solution to implement instead of a problem to solve. Developers should ask what kind of problem the Product Owner wants solved instead of discussing how to implement a solution without knowing the problem. Suppose Developers try to over-engineer the implementation to avoid non-existent problems. Product Owners should bring more context to ensure Developers are not wasting their time building something that isn't needed. Simplicity — the art of maximizing the amount of work not done — is essential. — Agile Manifesto Product Owners and Developers may not appreciate being challenged often. It might create a tense environment, but it will surely keep the team from focusing on something that doesn't matter. Open discussions lead to discovery, while no discussions lead to mediocrity. A Scrum Team without conflicts will never be a high-performing team. The fear of conflict blocks teams from evolving.
Patrick Lencioni insists that teams without trust cannot become outstanding teams because they will live in artificial harmony.
https://medium.com/serious-scrum/playing-safe-ensures-mediocrity-a290add8aa4e
['David Pereira']
2020-12-07 16:36:35.835000+00:00
['Agile', 'Product Management', 'Leadership', 'Software Development', 'Startup']
How to Load Third-Party Scripts Dynamically In React
What Do We Do Then? One of the many solutions can be to create a function that we can call on the pages where we need the third-party library, and to dynamically create and inject the <script></script> tag into the <body></body> of the application. Suppose we want to integrate the Google Maps API through a script and show the map on one of our pages. Let's look at an example:

const loadGMaps = (callback) => {
  const existingScript = document.getElementById('googleMaps');
  if (!existingScript) {
    const script = document.createElement('script');
    script.src = 'https://maps.googleapis.com/maps/api/js';
    script.id = 'googleMaps';
    document.body.appendChild(script);
    script.onload = () => {
      if (callback) callback();
    };
  }
  if (existingScript && callback) callback();
};

export default loadGMaps;

Let's try to understand what our function is doing. We begin by trying to detect whether or not we've already loaded Google Maps by looking for a <script></script> tag with its id set to googleMaps. If we find one, we skip loading the library a second time and simply invoke the callback. If we don't find an existingScript, we create the script dynamically. We start by creating an empty <script></script> tag in memory as script and then assign the necessary attributes: its src, and the id we'll use to identify the script later. We register an onload handler so the callback only fires once the library has actually loaded. Finally, we append the script to our <body></body> tag to actually load it.
https://medium.com/better-programming/loading-third-party-scripts-dynamically-in-reactjs-458c41a7013d
['Muhammad Anser']
2020-07-14 05:41:17.891000+00:00
['JavaScript', 'Front End Development', 'React', 'Reactjs', 'Programming']
How to Improve your Business by Smartly Using Feedback from clients and customers
How to ask for feedback: Every client, new or old, can help you with his or her knowledge about the service or product you provide. These clients can help you solve their problems better and faster, but they can also increase your sales and give you fresh ideas to attract new clients. The only way to get them to share this knowledge is by asking them the right questions and implementing their feedback. Below, you will find 11 good questions to ask for feedback:

1: Are you happy with the results from our product/service? Just a simple Yes or No can make a world of difference and might open up new questions to ask and help find the core problem in case they are not pleased with the result.

2: Please tell us more about the interaction with our employees. Find out if your employees (if applicable) did all they could to give your clients a pleasant experience. Try to find out if your service can be better in any way, in order to help your clients better next time.

3: Could you easily find what you needed? If you cannot provide the correct service for your clients, they will not come back but instead go to your competitor. Being easy to find will help other prospects find you and work with you; think of an easy-to-find website with the correct keywords, etc. Learn how accessible you are for prospects and try to think of ways to improve your 'findability'.

4: How easy was it to buy our product/service? After prospects find you, they want to buy from you if there is a good match, but if your prospect has to wait in line for a long time, they might choose not to buy from you. There can be many things that will make people walk away from you, so try to make buying from you as easy as possible.

5: How likely is it for you to repeat your business with us? The answer you want is "very likely", and you should always aim to get this answer. But when you get a negative answer more than once, try to find a pattern.
And when you find the pattern/reason, fix it… ASAP…

6: Did you feel comfortable working with us? If your customers feel underwhelmed or disappointed with your product or service, it could be a signal for you to adjust your service and/or the customer journey accordingly.

7: What is the positive thing you remember most from your experience with us? Let's stay positive and ask for positive feedback; knowing the answer to this insightful question will make you smile.

8: Why did you decide to do business with us? Customers have many choices when it comes to buying, but there is a main reason why they choose to do business with you. Knowing this reason will give you better insight into your main strength when it comes to attracting more customers and into what differentiates you from your competitors.

9: Is there a service or product you wish for, but that we do not offer at the moment? Maybe you will be able to expand your business using the answers to this question.

10: How can we exceed your expectations? Meeting expectations is normal for you, but knowing how to make your customers say 'wow' will make you stand out from the crowd.

11: Is there something else you would like to share with us regarding your experience with us? You cannot have the perfect questions to ask each customer, so it's a good idea to give them an opportunity to say more.

Always ask customers for feedback in order to learn more about your own business. Try to ask them right after you deliver your product or service, while their memories are fresh. Incorporate fresh or even real-time feedback into your customer service model and enjoy open communication with your customers. This will lead to long-term clients and, consequently, better business results.
https://medium.com/swlh/how-to-improve-your-business-by-smartly-using-feedback-from-clients-and-customers-71362b7ecd2a
['Eric Jan Huizer']
2019-09-18 16:49:07.253000+00:00
['Entrepreneurship', 'Business', 'Business Development', 'Feedback', 'Improvement']
How I Plan on Saving 83% of My Income In 2021
How I Plan on Saving 83% of My Income In 2021 Without making any drastic changes to my lifestyle In 2020, I wrote quite a bit about the importance of saving more of the money you already have and finding ways to make more money. Today, I want to go deeper on these concepts and take them from the hypothetical to the real world and detail my plan to save 83% of my income in 2021. Calculating my savings rate We need to start by establishing a shared understanding of what I mean when I say I plan on saving “83% of my income in 2021.” To do that, you’ll need to know the difference between gross and take-home pay. Gross pay = the total amount you make before any taxes or deductions are considered. Take-home pay = the total amount you make after taxes or deductions. This is the amount that hits your checking account on payday. If my monthly take-home pay was $10,000, I would need to be saving $8,300 per month to reach my goal. What I consider “savings” Some people overthink their savings rate by only including money that is invested, or by applying other arbitrary rules. I consider savings to be any use of money that will increase my net worth. For me, that includes the principal portion of my mortgage, as every dollar of principal paid increases my net worth by a dollar. However, I do not include savings that are intended for future consumption. For example, I have a savings account I use as a vacation fund. It’s been accumulating a decent balance in 2020 because I have not had the opportunity to travel. I don’t consider it as savings because the purpose of this money is to be spent in the near term. My three sources of income in 2021 My day job. My business. Portfolio income. The paycheck from my day job This still accounts for the largest proportion of my total income. A few years ago, this would have accounted for 100% of my income; in 2021, it will account for roughly 47% of my income. This is also the income source I use to cover just about every living expense. 
However, I still manage to save quite a bit of money from my paycheck, even after paying for all my living expenses. Mortgage principal. Contributions to my workplace retirement plan (matched by my employer). Contributions to individual retirement accounts. Savings into my son’s college fund. Cash into an emergency fund. I recently added it all up and found that I am saving approximately 60% of my take-home pay from my day job. Since that accounts for roughly 47% of my total income, that means I am saving 28% of my total income just from my paycheck. The net income from my business In 2020, the net income from my business after expenses and taxes was just over half of the take-home pay I earned from my day job. In 2021, my goal is for my business income to equal my take-home pay from my job. I don’t write this to boast, but to highlight what I think is the most important personal finance concept I wrote about in 2020, which is the power of adding scalable income to the income you earn from a 9–5 job. As I already mentioned, all of my living expenses are paid for using the income from my 9–5. That allows me to invest 100% of the profits from my business. It has been a lot of work building a business while working full time, but if you can make it work, you get the best of both worlds: stability and reliability from your paycheck and upside potential from the business. In my opinion, a digital business with a limited need for customer service is the best bet for someone looking to start a business on the side. It allows you to be completely flexible when you work, which is crucial if you are holding down a traditional 9–5 job. Since I save 100% of the income from my business and I project it to make up 47% of my total income in 2021, that adds another 47 percentage points to my savings rate; adding in the savings from my paycheck brings total savings up to 75%. 
Portfolio income As a first-generation wealth builder in my early 30s, the income generated by investments is by far my smallest source of income, representing roughly 8% of my income. This is exactly as it should be; in your 20s, 30s, and 40s, your human capital should account for most if not all of your income. If you save and invest a healthy percentage of the money you make from work, as you get into your 50s and 60s, the income from your investments should start making up a larger share of your income until it reaches a point where it can cover all of your living expenses. If you are someone who adheres to the Financial Independence, Retire Early (FIRE) movement, your goal would be to amass a big enough portfolio that your investment income could cover your living expenses by your 30s or 40s. While I admire and agree with the spirit of the FIRE movement, I think it places too much emphasis on financial capital and not enough emphasis on human capital. If your goal is to save 25 times your annual living expenses by your 30s or 40s, you are placing immense pressure on yourself. Using the (flawed) math behind FIRE, you would need $1 million invested to sustain a $40,000 per year lifestyle. In a world where so many are struggling to get by, that seems unrealistic for most. That’s why I have come up with a personal definition of financial freedom. You have achieved financial freedom when you can spend your days doing work that you love without worrying about how you will pay the bills. At first glance, that might seem a lot like the definition of FIRE. But unlike FIRE, using my definition, you don’t need to pay the bills relying solely on your financial capital. I put much more emphasis on leveraging your human capital. If you can find a way to make good money doing something you love, that sounds like financial freedom to me. The best part is that you don’t need to have $1 million in the bank to get there. 
You can rely on some combination of income from your human capital (work) and financial capital (investments) to pay the bills. By focusing on increasing my human capital (starting a business), I am on track to save 83% of my income in 2021. This, in turn, allows me to amass more financial capital. As a result, I might be able to write an article in 2022 titled “How I became a full-time entrepreneur.”
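The arithmetic running through this piece can be sketched in a few lines of code. The income shares and per-source savings rates are the rounded figures quoted above; the variable and interface names are invented for the illustration (and because the shares are rounded, they sum to slightly more than 100%):

```typescript
// Hypothetical sketch of the savings-rate arithmetic described in the article.
// Each source contributes (share of total income) x (fraction of it saved).

interface IncomeSource {
  shareOfTotalIncome: number; // fraction of total income
  savingsRate: number;        // fraction of that income saved
}

const sources: IncomeSource[] = [
  { shareOfTotalIncome: 0.47, savingsRate: 0.6 }, // day job: 60% of take-home saved
  { shareOfTotalIncome: 0.47, savingsRate: 1.0 }, // business: profits invested in full
  { shareOfTotalIncome: 0.08, savingsRate: 1.0 }, // portfolio income: reinvested
];

const totalSavingsRate = sources.reduce(
  (acc, s) => acc + s.shareOfTotalIncome * s.savingsRate,
  0
);

console.log(`${Math.round(totalSavingsRate * 100)}%`); // prints "83%"
```

Roughly 28% from the paycheck, 47% from the business, and 8% from the portfolio add up to the 83% headline number.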
https://medium.com/makingofamillionaire/how-i-plan-on-saving-83-of-my-income-in-2021-9a97b5f4f6c1
['Ben Le Fort']
2020-12-17 15:52:35.318000+00:00
['Life Lessons', 'Personal Finance', 'Business', 'Money', 'Entrepreneurship']
Engineering growth: introduction
Until recently, the process around progression and promotion as a Medium engineer was relatively opaque. We did have the concept of levels, and senior leadership were reasonably well calibrated on what being at a certain level entailed. However, engineers did not know what their level was, or the kinds of work they specifically needed to do to progress in their career. Instead, we relied upon the trust relationship between an engineer and their group lead, and on senior engineering leadership to do “the right thing”. Although we in the leadership team were thoughtful about these areas, and did our best to be fair, the lack of transparency was frustrating for some, and led others to express reasonable doubts about the process. This was at odds with one of our company values, Build Trust. To remedy this, in late August we rolled out our Growth Framework, a set of documents and tools that described what we value at Medium, how to progress, and how we measure and reward that progress. In doing so, we attempted to build a thoughtful process that was equitable, incentivised the right kind of work, and encouraged the growth of a robust, flexible, and inclusive team. Although development and delivery of the framework was led by Jamie Talbot and Madeline Bermes, it is the result of input and collaboration from the whole engineering team at Medium over many months. Today, we are excited to make our Growth Framework public for everyone to see. We’re releasing all this material under a Creative Commons Attribution-ShareAlike license. You are welcome to take this work, build on it, and make suggestions for improvements. Medium strongly believes that our industry would benefit greatly if we shared and built upon each other’s organisational processes the same way we share and build upon each other’s open source code. 
In addition to this introductory piece, there are five living documents which describe the specifics of the framework, and three tools that we use to plan, assess, and memorialise growth. (Thanks to Alex Wolfe for art direction of the written documentation!) We will modify and improve all of these over time. As with our hiring documentation, these public documents and tools are the actual ones that we use at Medium. There are no separate internal versions of these guides. Part 1: Framework overview The framework overview describes the major characteristics of the framework, the problems we are trying to solve, and related areas like salary and titles. Part 2: Tracks Medium engineers add value to the engineering organisation in many ways, and we attempt to capture this by recognising progress in 16 areas. The tracks document explains the rationale for each track. Part 3: Assessing progress Assessing progress explains how we determine whether an engineer has reached a given milestone in a track, as well as the cadence at which we check in on progress and make formal assessments. It also discusses the tension between subjectivity, objectivity, and bias, and acknowledges the role that opportunity plays in advancement. Part 4: Appeals process Given that the framework is somewhat subjective, we accept that even reasonable people acting in good faith can make mistakes or fail to appropriately value work. The appeals process outlines the way in which engineers can challenge a decision they consider to be inaccurate. Part 5: Frequently asked questions Even with exhaustive documentation, people still have questions. Frequently asked questions attempts to answer them. The rubric is the source of truth that defines the progression of engineers at Medium. It provides descriptions for each milestone along with example tasks and behaviours that, taken together, illustrate our expectations of engineers as they advance in their careers, without being overly prescriptive. 
Snowflake is a simple web tool that we use to have growth conversations with engineers. It lets us show them how progress along certain tracks will affect their overall level, and helps them make decisions about how they want to grow. You can find and fork the code on GitHub. Props to Emma Zhou for building this in her spare time, while simultaneously delivering Claps, and Jonathan Fuchs for later iterations! We like telling stories at Medium, and we like to think of our engineers’ progress at Medium as a continuously unfolding story, which includes things like formal reviews and peer feedback. The Medium Story template is a Google Doc template that lets us celebrate and memorialise an engineer’s growth over time, and provides a historical record that we can use to frame growth conversations.
https://medium.com/p/8ba7b78c8d6c
['Medium Engineering']
2019-07-01 13:42:19.624000+00:00
['Medium', 'Engineering', 'Professional Development', 'Values']
Going Rogue, Part 2: Phantom Types
In the last post, we introduced Rogue, the type-safe DSL we implemented for writing queries against MongoDB. In this post we’ll extend QueryBuilder to support query sort ordering, while making sure we can’t build any nonsense queries like this: Venue where (_.mayor eqs 1234) limit(10) fetch(100) Sort ordering Here’s how we left the QueryBuilder class: class QueryBuilder[M <: MongoRecord[M]]( rec: M with MongoMetaRecord[M], clauses: List[QueryClause[_]], lim: Option[Long]) { def where[F](clause: M => QueryClause[F]): QueryBuilder[M] = new QueryBuilder(rec, clause(rec) :: clauses, lim) def and[F](clause: M => QueryClause[F]): QueryBuilder[M] = new QueryBuilder(rec, clause(rec) :: clauses, lim) def limit(n: Long) = new QueryBuilder(rec, clauses, Some(n)) def fetch(): List[M] = … def fetch(n: Long) = this.limit(n).fetch() } To implement sort ordering, we just need to add an order field and the orderAsc() and orderDesc() methods: class QueryBuilder[M <: MongoRecord[M]]( rec: M with MongoMetaRecord[M], clauses: List[QueryClause[_]], order: List[(String, Boolean)], lim: Option[Long]) { // … def orderAsc[F](field: M => QueryField[M, F]): QueryBuilder[M] = { val newOrder = (field(rec).field.name, true) :: order new QueryBuilder(rec, clauses, newOrder, lim) } def orderDesc[F](field: M => QueryField[M, F]): QueryBuilder[M] = { val newOrder = (field(rec).field.name, false) :: order new QueryBuilder(rec, clauses, newOrder, lim) } } We just pass along the order fields to the query executor, and that’s it! Now we can do this: Venue where (_.venuename startsWith "Starbucks") orderDesc(_.popularity) fetch(5) The fieldToQueryField implicit conversion we defined before does most of the work. Here it is for reference: implicit def fieldToQueryField[M <: MongoRecord[M], F](field: Field[M, F]) = new QueryField[M, F](field) Canned queries But there’s still a problem we’d like to avoid. We’ve found it pretty convenient to write methods that construct “canned” queries. 
The caller can take those canned queries and further refine them by adding additional where clauses or sort orderings before calling fetch(). But consider a case like this: def findByCategory(cat: String) = Venue where (_.categories contains cat) orderDesc(_.popularity) // elsewhere … findByCategory("Thai").orderAsc(_.venuename).fetch() The caller might not be aware that findByCategory() returns a query ordered by popularity, and tries to supply a sort ordering of its own. Instead of a query sorted on venue name, the caller gets a query sorted by popularity then venue name, which is unexpected. Similarly, consider the case where a method returns a canned query that includes a limit() modifier: def findMostPopular(cat: String) = Venue where (_.categories contains cat) orderDesc(_.popularity) limit(10) // elsewhere … findMostPopular("Speakeasy").fetch(20) It’s not clear which limit modifier should take effect — maybe the canned query method has a good reason for limiting the query, or maybe the caller knows best. The Phantom of the Sca-laaaaaaa Using phantom types, we can get the compiler to prevent both of these situations from happening. The idea is to declare some types that cannot be instantiated and serve no other purpose but to appear in the type parameters of another class: abstract sealed class Ordered abstract sealed class Unordered abstract sealed class Limited abstract sealed class Unlimited First we need to add type parameters Ord and Lim to the type signature of QueryBuilder. class QueryBuilder[M <: MongoRecord[M], Ord, Lim]( rec: M with MongoMetaRecord[M], clauses: List[QueryClause[_]], order: List[(String, Boolean)], lim: Option[Long]) { // … } Keep in mind that Ord and Lim are just type parameters, named to indicate what phantom type values they will take on in an actual instance of QueryBuilder. Next, we need to initialize the type parameters when the builder is first created. 
This happens in the implicit conversion from MongoRecord to QueryBuilder: implicit def metaRecordToQueryBuilder[M <: MongoRecord[M]] (rec: MongoMetaRecord[M]) = new QueryBuilder[M, Unordered, Unlimited](rec, Nil, Nil, None) Then, for each method that introduces a constraint on the builder, we return a new builder instance with updated type parameters: def limit(n: Long) = new QueryBuilder[M, Ord, Limited](rec, clauses, order, Some(n)) def orderAsc[F](field: M => QueryField[M, F]): QueryBuilder[M, Ordered, Lim] = { val newOrder = (field(rec).field.name, true) :: order new QueryBuilder[M, Ordered, Lim](rec, clauses, newOrder, lim) } def orderDesc[F](field: M => QueryField[M, F]): QueryBuilder[M, Ordered, Lim] = { val newOrder = (field(rec).field.name, false) :: order new QueryBuilder[M, Ordered, Lim](rec, clauses, newOrder, lim) } The remaining methods leave the type parameters unchanged: def where[F](cl: M => QueryClause[F]): QueryBuilder[M, Ord, Lim] = new QueryBuilder[M, Ord, Lim](rec, cl(rec) :: clauses, order, lim) def and[F](cl: M => QueryClause[F]): QueryBuilder[M, Ord, Lim] = new QueryBuilder[M, Ord, Lim](rec, cl(rec) :: clauses, order, lim) Enforcing the constraints Now that we have the constraints set up, we need to enforce them. This is done using the =:= class, defined in scala.Predef as follows: sealed abstract class =:=[From, To] extends (From => To) object =:= { implicit def tpEquals[A]: A =:= A = new (A =:= A) { def apply(x: A) = x } } Basically, this sets it up so that the compiler can only generate instances of =:=[A, A]. If you make a method take an implicit argument of type =:=[A, B], you are constraining types A and B to be the same in invocations of that method. 
Here is how we use it in QueryBuilder: def limit(n: Long)(implicit ev: Lim =:= Unlimited) = new QueryBuilder[M, Ord, Limited](rec, clauses, order, Some(n)) def fetch(n: Long)(implicit ev: Lim =:= Unlimited) = this.limit(n).fetch() def orderAsc[F](field: M => QueryField[M, F]) (implicit ev: Ord =:= Unordered): QueryBuilder[M, Ordered, Lim] = { val newOrder = (field(rec).field.name, true) :: order new QueryBuilder[M, Ordered, Lim](rec, clauses, newOrder, lim) } def orderDesc[F](field: M => QueryField[M, F]) (implicit ev: Ord =:= Unordered): QueryBuilder[M, Ordered, Lim] = { val newOrder = (field(rec).field.name, false) :: order new QueryBuilder[M, Ordered, Lim](rec, clauses, newOrder, lim) } So for instance, the compiler will be able to supply the implicit ev argument to limit() only when the type parameter Lim is the type Unlimited. When the compiler cannot supply a value for the ev argument, a compiler error will occur. We’ve solved one problem but created another — now it is impossible to specify more than one sort ordering. 
We can get around this pretty easily, though, by providing two additional sort methods that require the builder to already be in the Ordered state: def andAsc[F](field: M => QueryField[M, F]) (implicit ev: Ord =:= Ordered): QueryBuilder[M, Ordered, Lim] = { val newOrder = (field(rec).field.name, true) :: order new QueryBuilder[M, Ordered, Lim](rec, clauses, newOrder, lim) } def andDesc[F](field: M => QueryField[M, F]) (implicit ev: Ord =:= Ordered): QueryBuilder[M, Ordered, Lim] = { val newOrder = (field(rec).field.name, false) :: order new QueryBuilder[M, Ordered, Lim](rec, clauses, newOrder, lim) } This way, when a method returns a canned query that is already ordered, the caller is reminded of this fact by the need to call andAsc() instead of orderAsc(): def findByCategory(cat: String) = Venue where (_.categories contains cat) orderDesc(_.popularity) findByCategory("Thai").andAsc(_.venuename).fetch() For serious Some of these constraints might seem a little contrived, but let me give you a case where constraints like this can really prevent you from doing something dangerous. We added support for modify and delete queries to Rogue; they look something like this: Venue where (_.closed eqs false) orderAsc(_.popularity) limit(100) modify (_.closed setTo true) updateMulti() Venue where (_.closed eqs false) orderAsc(_.popularity) limit(100) bulkDelete_!!() These queries purportedly close (or delete) the 100 least popular venues. The only problem is, MongoDB does not support limits on delete and modify queries! So the limit(100) will be silently ignored. Very bad! Thankfully, we can just throw an (implicit ev: Lim =:= Unlimited) on the bulkDelete_!!() and modify() methods, and the compiler will catch this for you. Whew! There’s a lot more we’ve built into Rogue — special operations on list and map fields, pagination, support for geospatial queries, and the ability to select only certain fields from a record, to name a few. 
We’re thinking about adding support for using QueryBuilders in for-comprehensions, once we figure out what kind of monad a QueryBuilder is! We’ll save that for another post though. In the meantime, you can check out the Rogue sources on github. - Jason Liszka and Jorge Ortiz, foursquare engineers
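The phantom-type trick is not Scala-specific. As a rough sketch, the same compile-time state tracking can be expressed in TypeScript using a toy builder, where a `this` parameter plays the role of Scala's `implicit ev` evidence. None of this is real Rogue API; every name below is invented for the illustration:

```typescript
// Toy illustration of phantom-type state tracking, loosely mirroring
// Rogue's Ordered/Unordered and Limited/Unlimited type parameters.

type Ordered = "ordered";
type Unordered = "unordered";
type Limited = "limited";
type Unlimited = "unlimited";

class Query<Ord extends Ordered | Unordered, Lim extends Limited | Unlimited> {
  constructor(
    readonly order: string[] = [],
    readonly lim?: number
  ) {}

  // The `this` parameter acts like Scala's `implicit ev: Ord =:= Unordered`:
  // orderAsc is only callable while the query is still Unordered.
  orderAsc(this: Query<Unordered, Lim>, field: string): Query<Ordered, Lim> {
    return new Query<Ordered, Lim>([...this.order, field], this.lim);
  }

  // limit is only callable while the query is still Unlimited.
  limit(this: Query<Ord, Unlimited>, n: number): Query<Ord, Limited> {
    return new Query<Ord, Limited>(this.order, n);
  }
}

const q = new Query<Unordered, Unlimited>();
const q2 = q.orderAsc("popularity").limit(10);

// q2.limit(20);             // compile error: Lim is already Limited
// q2.orderAsc("venuename"); // compile error: Ord is already Ordered
console.log(q2.lim); // prints 10
```

As in the Scala version, illegal call sequences are rejected at compile time rather than failing silently at run time.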
https://medium.com/foursquare-direct/going-rogue-part-2-phantom-types-d18ece7235e6
[]
2018-07-31 18:59:29.988000+00:00
['Engineering', 'Foursquare']
Interview: Grey DeLisle Talks about The Zombies, Petunia and The Vipers, and the Musical Holy Trinity
Grey DeLisle — ‘O Holy Night’ Interview: Grey DeLisle Talks about The Zombies, Petunia and The Vipers, and the Musical Holy Trinity Randall Radic · Dec 17 Grammy-winner Grey DeLisle unveiled the music video for her interpretation of “O Holy Night” a while back. You already know who Grey is because you’ve heard her voice on myriad cartoons. Not only is she a grand singer, but she’s a stellar voice-over artist. Her roles include Vicky of The Fairly OddParents, Samantha “Sam” Manson of Danny Phantom, and Mandy from The Grim Adventures of Billy & Mandy. In 2021, Grey is releasing a new album, Willie We Have Missed You, via Regional Records, along with a music video for each track. Grey’s rendition of “O Holy Night” trembles with passion and hints of country gospel, resulting in intimate, dazzling textures. The video shows Grey in the studio recording the song amid an array of sound equipment which, when juxtaposed against the simple beauty of the song, imbues Grey’s voice with a transcendent quality. Catching up with Grey DeLisle, Pop Off spoke with her about how she got started in music, her influences, and what she’s up to right now. What three things can’t you live without? Strong coffee, humor, and I hate to say this but, apparently, my PHONE because I recently lost it and my entire world ground to a halt and I thought I was gonna die. What motivated you to cover “O Holy Night” rather than another Christmas song? It’s always been a favorite. I remember growing up in church and not being very moved by any of the Christmas music there….but ‘O Holy Night’ and that soaring chorus always got me. What do you hope your fans/listeners take away with them when they listen to your music? Well…with everything that’s happening in our world right now, I know I’m taking lots of comfort in my favorite music so I hope I’m able to offer a bit of that to others as well. A musical hug! How did you get started in music? 
Does it tie into your career as a voiceover actor? I come from a very musical family. My grandma Eva Flores used to sing with the Latin bandleader Tito Puente and my mom was always in local bands when I was growing up. My uncle always had a guitar at family gatherings and we would make tapes of us all harmonizing and singing Grandma’s old boleros or The Beach Boys. Which musicians/vocalists influence you the most? I’m a giant fan of the Musical Holy Trinity (Linda Ronstadt, Emmylou Harris, and Dolly Parton!) and my dad used to play a lot of old-time country when I was growing up. The Carter Family, Hank Williams…he also loves Slim Whitman! Which artists are you listening to right now? I just acted in a short film called “The Musicianer” by the Grammy-Nominated filmmaker Beth Harrington (Welcome to the Club, The Winding Stream, etc.) and my co-star was Petunia of Petunia and the Vipers! Since we wrapped, I’ve become a huge fan! Download “The Cricket Song!” Glorious! I’ve also been listening to Odessey and Oracle by The Zombies. It’s an old favorite. It makes me happy. In 2021, you’ll be releasing a new album, Willie We Have Missed You, via Regional Records. What can you share about the album? Well, it was initially supposed to be mostly older unreleased material, but I’ve been writing like crazy so we might add a few new originals. I have worked with so many incredible songwriters in the past and always wanted to cover their songs. I finally got to do my own renditions of Murry Hammond’s “Valentine” and Marvin Etzioni’s “You Are The Light,” and I’m so excited to share my take on those beloved tunes. Do you have a guilty music and/or entertainment pleasure? My favorite TV Show right now is PEN15. It’s a genius show…so that’s not the guilty part. The guilty part is that I watch it with my 14-year-old son. It’s pretty racy at times, but Tex assures me that he’s seen worse on TikTok so…. 
Any advice for young females out there who want to do music or get into acting? Create your own content! Never let anyone else tell you what you can and can’t do. If you make your own art, you don’t have to ask permission anymore. Nobody gets to decide if you are worthy but yourself! I started my own label (Hummin’bird Records) when I was 25 so I could put out my own records. The internet has made it even easier now to get your art into the world! What’s next for you? Well the cartoon stuff is keeping me pretty busy! I recently joined the cast of The Simpsons and I have some incredible roles in the new Invincible series with J.K. Simmons and Jon Hamm…still doing Daphne on various Scooby Doo projects…and I’m also homeschooling 3 kids during a pandemic so my plate is FULL. But I’ve written 8 new songs in the past week so there will surely be more than one new record released in 2021! When the world decides to re-open….I’ll be READY! Follow Grey DeLisle Website | Facebook | Instagram | Twitter | Spotify
https://medium.com/pop-off/interview-grey-delisle-talks-about-the-zombies-petunia-and-the-vipers-and-the-musical-holy-e7cd96d390c7
['Randall Radic']
2020-12-17 12:48:40.145000+00:00
['Grey Delisle', 'O Holy Night', 'Interview', 'Christmas Music', 'Music']
Five Amazing Tips in this Article Will Change you forever
1- Read more articles on Medium.com about writing more articles on Medium.com so you can write more articles on Medium.com so your readers can read more articles and write more articles for their readers. Medium.com is not a pyramid scheme getting everybody to write on this platform at all. 2- I have read as many books as there are stars in a brightly lit street on a New Year’s Eve. Out of those books, and one very popular and common book in particular, I have found some tips and tricks, anecdotes that you will find available anywhere else on the internet, but I will attach a very relevant picture of a hot guy/girl posing in a way that distorts your perception of reality or I will just post a picture of a photoshoot my friend did with his very expensive iPhone. A very relevant picture for this article that must motivate you. 3- Wake up at a time of the day when even a rooster is surprised to see you outside. In that half dead state you better run boy/girl. Even if it is for ten minutes you better move that b***. Make sure to jot down everything you have experienced and write an article on Medium.com about you waking up early for a week out of a total thirty years of your life. That way even if you never wake up on time again you will have immense gratification from other readers who will clap for you fifty times each and for a second you will know that life is exactly as you want it to be. 4- Drink water every waking second of your day if you want your skin to look like a light bulb. You will never need artificial lighting in your house since you will be glowing all day every day. Going to the washroom every thirty minutes just to relieve yourself is absolutely normal and very responsible. Other benefits include more energy and focus. Your bench press max will definitely improve from a 10 lb bar to at least 11 lb because you have been drinking all the water that was ever available to you. 
Other than that the greatest benefit of drinking as much water as possible is you are a gag away from puking all over the floor, which is a very useful tip if you are a pelican and feed your babies through regurgitation. Then repeat the same regurgitation on Medium.com when you write about your experience so everybody who is trying to fix things for themselves can be blessed with the same super powers. You guessed it, this animated guy sure had his fill of water 5- Final advice is to read: every waking second when you are not writing on Medium.com or drinking water or talking about it, you should be reading about it. I am absolutely not aware of the fact that the first tip was about reading and I am not reiterating the fact on purpose.
https://medium.com/bulletproof-writers/five-amazing-tips-in-this-article-will-change-you-forever-d253cf62432
['Talha Khan']
2020-12-11 20:26:31.610000+00:00
['Life Lessons', 'Humor', 'Health', 'Life', 'Sarcasm']
The case for adding TypeScript to your JavaScript Applications
The case for adding TypeScript to your JavaScript Applications History Undeniably, JavaScript is the programming language of choice for most web developers, and it has been the “language of the internet” for over two decades. During the early days of the internet, websites were mostly static pages, using HTML and some basic styling; a client would make a request to a server, and the server would respond with these static pages. However, it was the desire of many early web developers to make these websites more dynamic in nature. One specific person, Marc Andreessen, who was the founder of Netscape Communications, believed that websites with animations and interactivity would be the “web of the future.”¹ This was at a time when Netscape was becoming more popular on the web with their browser “Netscape Communicator.” There was a need for a scripting language to make simple changes to the DOM (Document Object Model) used by static websites; this language needed to be easy-to-use and cater to people who “designed,” rather than those who “programmed.” In came Brendan Eich, who worked with Netscape to create “Mocha,” which later became “LiveScript,” and finally “JavaScript.” Many of the features JavaScript is still known for today were designed into the language from the very beginning. As JavaScript has grown over its many years of web scripting, it has expanded its functionality as a language capable of operating in many more paradigms than it was originally intended. These include: server applications (node.js), mobile applications (React-Native) and even desktop applications (electron.js). As this once simple language has expanded, many of its inherent limitations have surfaced and caused frustration among those who prefer to work with more “traditional” programming languages such as Java and C#; this is even more apparent when enterprise-level server applications are created using these languages as opposed to JavaScript. 
What is it about these programming languages that makes them easier to use for many programmers? JavaScript Limitations Perhaps the most notable limitation that JavaScript has in its traditional form is its lack of the “type checking” that is part of the syntax of many other languages. What exactly is this? In simple terms, type checking allows pieces of your code to be checked for type safety before the program is compiled. This means that if a variable is being placed in a part of your application where it will not perform correctly because its “type” is invalid, then this will be apparent before the build of that application. In JavaScript, there is no checking for the type of an input variable. A very simple example of this is when math is being performed on a variable being input into a function: const addNumbers = (num) => { return num + 1; } In JavaScript, if that input variable is (or could be) a string, such as “1”, then this would produce unexpected results, and not even warn the programmer of this possibility before the program starts running. While “if” statements could be used in many places to account for this, it is certainly a cumbersome solution, and definitely not the preferred one for JavaScript programmers. In Comes TypeScript So the question becomes: how can we combine the more sophisticated features that many other languages support (such as this type safety) while still using a familiar JavaScript syntax? This is where TypeScript has become the primary choice for many would-be JavaScript programmers. TypeScript is an open source superset of JavaScript that was created by and is maintained by Microsoft. In a TypeScript application, this same code would be written as: const addNumbers = (num: number): number => { return num + 1; } In this specific example, the input variable “num” is explicitly stated to be of the type “number,” which appears after the colon. 
The second “number” states that the return type of the function must be a number. If you were to call this function in a TypeScript program in such a way that the input type is not a number, or the return value of the function is not being used as a number, then an error will occur before it is compiled into vanilla JavaScript. Furthermore, in many IDEs (Integrated Development Environments) a linter will actually warn you of this error as you write it. TypeScript Example: Interface Another prime example of using TypeScript effectively is the “interface” type. The main purpose of an interface is to type-check the shape of a value; this is sometimes referred to as “duck typing” or “structural typing.”² When would something like this be useful? In many cases, when you are dealing with objects in JavaScript, you will call on certain values within an object. However, when dealing with the values of a key within an object variable, programmers will often run into an “undefined” error. This happens when they attempt to manipulate the value of a key that is not contained within that object. As an example, take the below code: const person = { name: "Erin" }; return person.age.toString(); In this instance, we are calling on the “age” key, which is not contained within the “person” object. If someone were to run this code as-is, while the IDE will most likely not show this error, once it is run, one would discover a common error: TypeError: Cannot read property ‘toString’ of undefined. From viewing this code in isolation, it is obvious to us that this would happen. However, what if the toString function were only called sparingly within a Node server or in a React application? The programmer would never see this error until the function was called in an edge case, at which point it has already created a poor user experience if it was not caught in testing. This can especially happen when objects are manipulated within functions.
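Both failure modes discussed above can be reproduced in a few lines of plain JavaScript (function and object names taken from the article; the try/catch is added here only so the script keeps running past the crash):

```javascript
// Failure mode 1: silent string concatenation instead of addition.
const addNumbers = (num) => num + 1;
console.log(addNumbers(1));   // 2
console.log(addNumbers("1")); // "11" — concatenation, with no warning

// Failure mode 2: reading a key the object does not have.
const person = { name: "Erin" };
let message;
try {
  message = person.age.toString(); // person.age is undefined
} catch (err) {
  message = err.name;
}
console.log(message); // TypeError
```

Nothing in the language flags either problem before the script runs; both only surface at runtime, which is exactly the gap TypeScript is designed to close.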
TypeScript interfaces help with this type of problem by allowing us to define what a “Person” should look like. Take the following TypeScript code: interface Person { name: string, age: number } const erin: Person = { name: "Erin" } return erin.age.toString(); In this instance, while the linter will not pick up on the toString error that is obvious here, it will tell us that the variable “erin” is missing a property “age”; the error would look like this: Property ‘age’ is missing in type ‘{ name: string; }’ but required in type ‘Person’. In addition, you also define what the type of each property should be. As you can see from this case, being able to define what an object should look like ahead of declaring it helps alleviate many of the undefined errors that could occur. Some Statistics Let’s take a look at one real-world example of TypeScript being used at a large tech company. Airbnb is one such business that has made use of TypeScript and its features. When Airbnb analyzed its codebase, it determined that TypeScript could have reduced its bug count by about 38%. Considering the size of their codebase, this is a significant margin. Perhaps an important question to ask when learning any new technology is how popular it is currently, and how much that popularity will grow and remain in demand. It is no secret that TypeScript is becoming much more popular over time. Between 2016 and 2018 alone, TypeScript rose from 20.8% to 46.7% on Google Trends.³ Conclusion As can be seen from these examples, TypeScript is an extraordinarily useful tool when working on a JavaScript application. Some might argue that TypeScript is noticeably more verbose than vanilla JavaScript, and that it is not worth the time and effort to add these features to your application. Perhaps with very small applications this is the case, as the time spent debugging the code could be less than the time spent adding all the necessary types.
However, even for medium-sized applications, TypeScript is incredibly useful for making sure parts of your application are safe from errors, as well as for keeping your code organized and uniform. This is especially the case when working with other people, where the types and “shapes” of your code need to remain consistent and be agreed upon before putting pieces of it together. The tradeoffs of using TypeScript come down to the extra time spent writing types, the number of bugs that can be avoided by using it, and how critical program correctness is to the project. For most larger JavaScript programs, however, programmers will find the use of TypeScript to be well worth the extra syntax. [1] Sebastian Peyrott. (January 16, 2017). A Brief History of JavaScript. https://auth0.com/blog/a-brief-history-of-JavaScript/ [2] TypeScript Documentation. https://www.typescriptlang.org/docs/handbook/interfaces.html [3] Saurabh Barot. (July 19, 2019). Why Developers Love to Use TypeScript in 2021? https://aglowiditsolutions.com/blog/why-use-TypeScript/#:~:text=Popularity%20of%20TypeScript,from%2020.8%25%20to%2046.7%25.
https://zacharyernst.medium.com/the-case-for-adding-typescript-to-your-javascript-applications-f4ee6599e003
['Zachary Ernst']
2020-12-14 23:39:38.173000+00:00
['JavaScript', 'Typescript']
From Novice to Expert: How to Write a Configuration file in Python
Which format of the configuration file should I use? In fact, there are no constraints on the format of the configuration file as long as the code can read and parse it. But there are some good practices. The most common and standardized formats are YAML, JSON, TOML and INI. A good configuration file should meet at least these 3 criteria: Easy to read and edit: It should be text-based and structured in such a way that it is easy to understand. Even non-developers should be able to read it. Allow comments: A configuration file is not something that will only be read by developers. It is extremely important in production when non-developers try to understand the process and modify the software behavior. Writing comments is a way to quickly explain certain things, thus making the config file more expressive. Easy to deploy: A configuration file should be accepted by all operating systems and environments. It should also be easy to ship to the server via a CDaaS pipeline. Maybe you still don’t know which one is better. But if you think about it in the context of Python, then the answer would be YAML or INI. YAML and INI are well accepted by most Python programs and packages. INI is probably the most straightforward solution, with only one level of hierarchy. However, there are no data types in INI; everything is encoded as a string. The same configuration expressed in YAML supports nested structures quite well (like JSON). Besides, YAML natively encodes data types such as string, integer, double, boolean, list, dictionary, etc. JSON is very similar to YAML and is extremely popular as well; however, it’s not possible to add comments in JSON. I use JSON a lot for internal config inside a program, but not when I want to share the config with other people. TOML, on the other hand, is similar to INI, but supports more data types and has a defined syntax for nested structures.
It’s used a lot by Python package management tools like pip or Poetry. But if the config file has too many nested structures, YAML is a better choice. A TOML file looks like INI, but every string value is quoted. So far, I’ve explained WHY and WHAT. In the next sections, I will show you the HOW.
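The “everything is a string” limitation of INI is easy to see with Python’s standard-library configparser: values come back as strings unless you convert them explicitly. A minimal sketch (the section and key names here are invented for illustration):

```python
import configparser

# A small INI config: note there is no way to mark 8080 as an integer.
ini_text = """
[server]
host = localhost
port = 8080
debug = true
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# Plain indexing always yields a string, regardless of what the value "looks like".
print(type(config["server"]["port"]))        # <class 'str'>

# Typed access requires an explicit converter.
print(config["server"].getint("port"))       # 8080
print(config["server"].getboolean("debug"))  # True
```

With YAML or TOML the parser would hand back an int and a bool directly; with configparser, every consumer of the config has to know which converter to call.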
https://towardsdatascience.com/from-novice-to-expert-how-to-write-a-configuration-file-in-python-273e171a8eb3
['Xiaoxu Gao']
2020-12-28 14:25:03.409000+00:00
['Python', 'Software Development', 'Programming', 'Data Science', 'Coding']
Ordinary People Worship Billionaires for One Simple, Flawed Reason
Being a Billionaire Sounds Amazing. Imagine if you woke up tomorrow with billions of dollars suddenly at your fingertips. If you hated your job, you could finally quit. You could tell your boss exactly what you thought about them. You wouldn’t have to worry about the future anymore, at least not half as much as you do right now. You’d never worry about bills again — or health insurance. You’d never worry about paying your rent on time, or shoplifting just to feed your family. You could stop feeling so desperate all the time. You could finally relax. Stress would fall off you like dust. Nobody could tell you what to do anymore. You’d have enough money for your children, and their children. You could buy a bigger house, and pay someone to clean it. You could drive exactly the kind of car you wanted. You could make the guy or girl of your dreams fall in love with you, at least for a little while. This is how most people think about wealth. It doesn’t matter if most of it isn’t true. Only some of it has to be.
https://medium.com/the-apeiron-blog/ordinary-people-worship-billionaires-for-the-wrong-reasons-64fcaee27ef3
['Jessica Wildfire']
2020-12-16 02:45:50.775000+00:00
['Politics', 'Society', 'Equality', 'Culture', 'Money']
Explaining the revenue hockey stick in funding presentations
One of the last pages in every venture capitalist funding presentation is the “hockey stick” of revenues and profits shooting through the roof. VCs expect a bit of optimism from a startup, but at the same time want to do a reality check on these numbers. I often use a company snapshot, a tree of the key factors that add up/multiply to the projected revenue figure. Make sure the factors are real things you can touch: people, visits, etc. CORRECTION: Also make sure the tree adds up and calculates through, something that cannot be said about the example below.
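The “tree of key factors” check can be made concrete with a toy calculation: each leaf is something countable, and the parents multiply or add through to the revenue figure. A minimal sketch, with all numbers invented for illustration:

```python
# Hypothetical factor tree behind a projected revenue figure.
monthly_visits = 200_000   # people reached, from the marketing plan
signup_rate = 0.05         # share of visits that become paying users
price_per_user = 20.0      # dollars per user per month

paying_users = monthly_visits * signup_rate       # 10,000 users
monthly_revenue = paying_users * price_per_user   # $200,000

# The reality check a VC applies: does the tree actually calculate through?
assert monthly_revenue == monthly_visits * signup_rate * price_per_user
print(f"Projected monthly revenue: ${monthly_revenue:,.0f}")
```

If any node in the deck doesn’t reproduce from its children like this, the hockey stick won’t survive due diligence.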
https://medium.com/slidemagic/explaining-the-revenue-hockey-stick-in-funding-presentations-d4c8d1c59bf0
['Jan Schultink']
2016-12-27 08:40:57.898000+00:00
['Presentation', 'PowerPoint', 'Presentation Design', 'Design', 'Data Visualization']
9 Hard Lessons I Struggled to Learn During My 18 Years as a Software Developer
1. Leave the Ego at the Door Developers have huge egos. That’s a fact. Why, though? I would argue that anyone who takes our profession seriously would consider themselves to be somewhat of an artist. Yes, we might not be singing in front of millions of people or painting the Mona Lisa, but we’re sometimes writing code that solves very complex problems in such an elegant and efficient way that we can’t help but be proud of our work. I would argue that a developer is just as much of an artist as a mathematician is through the way they solve their problems. Because of this, we tend to crawl around our code — just like a mama bear looking after her offspring. We made it, we love it, and we can’t stand when people start arguing about how wrong it may or may not be. Then again, this is not helping anyone. We love our work, but we need to understand that we’re solving problems. By discussing our ideas and our solutions with others, better alternatives might arise. There is nothing wrong with that. In fact, cooperation is normally what yields the best solutions. I’ve seen all kinds of egos in my time, and I’ve never seen a case where that ego worked in the developer’s favor. So my advice here? Leave the ego at the door the minute you start working as a dev. Just swallow it and hear what others have to say about your work. Accept that better ideas might come from outside your head and that they will only help you improve your skills. You can only win if you listen to feedback.
https://medium.com/better-programming/9-hard-lessons-i-struggled-to-learn-during-my-18-years-as-a-software-developer-14f28512f647
['Fernando Doglio']
2020-12-07 16:46:30.823000+00:00
['Programming', 'Software Development', 'Technology', 'Lessons Learned', 'Startup']
Assessing Global Health, One 📈 at a Time
The Institute for Health Metrics and Evaluation website is a treasure trove of data. On a semi-regular basis, the institute publishes data visualizations using the data freely available on the website. While the data is “freely available” in the charts, most of it isn’t downloadable or “unlocked” as such. By re-plotting it in Plotly, we “free the data,” so to speak, opening it up to future investigation: other users can easily download it, play with it, and explore it. In this post, we present five charts that speak volumes about the state of health around the globe, examining in particular key areas such as healthy life expectancy, the prevalence of overweight adults, smoking prevalence, high-risk drinking prevalence, and deaths in the United States. Plotly graphs are now embeddable in Medium, as demonstrated below. To learn how to embed can’t-miss Plotly graphs in Medium, check out our how-to post. Make sure your Plotly graphs look as sharp on your desktop computer as they do on your mobile device — our post on mobile charts has the details. 1. Healthy Life Expectancy Did you know? With an average healthy life expectancy (HLE) of nearly 74 years, Japan leads the world. On the flip side, the Kingdom of Lesotho, a landlocked country in southern Africa, holds the distinction of having the globe’s lowest average HLE at 41.19 years. Iceland has the 2nd-highest average HLE at 72.95 years and the Central African Republic the 2nd-lowest at 43.19 years. 2. The Prevalence of Overweight Adults Kuwait holds the distinction of having the highest prevalence (79%) of overweight adults in the world (BMI ≥ 25 for adults aged 20+). The overweight and obesity epidemic has been investigated in this journal article. Despite this, Kuwait sports a fairly average HLE of 69.56 years. Egypt also has a near-global-high prevalence of overweight adults at 70.9%. Perhaps this high value can be attributed to a high-carbohydrate, high-calorie diet, as speculated by The Guardian.
On the contrary, Burundi, in south-central Africa, has the lowest prevalence of overweight adults at 6.3%. 2a. The Prevalence of Overweight Adults vs. Healthy Life Expectancy China and the U.S. have very nearly the same HLE, but being overweight is 38% more prevalent in the States. Meanwhile, the Philippines has an overweight prevalence of 26.9% but an HLE of just 62.05 years. HLE is likely more closely linked to things like local conditions, reliable access to healthcare, and nutrition than it is to being overweight. 3. Daily Smoking Prevalence At 45.6%, Greenland has the highest prevalence of smokers in the world. Macedonia, where smoking is 14% less prevalent than in Greenland, has the second-most smokers. Meanwhile, smoking is least prevalent in Panama, with only 3.5% of the population identifying as smokers. A smoker in Panama would have to spend 13.1% of the national median income to purchase 10 of the cheapest cigarettes to smoke each day. 3a. Daily Smoking Prevalence vs. Healthy Life Expectancy India has a daily smoking prevalence of 9.8% while Greece is sky-high at 31.2%. Despite this, India has an HLE of 58.12 years and Greece 71.13. 4. High-Risk Alcohol Consumption Belarus in Eastern Europe is home to the world’s heaviest drinkers, with high-risk alcohol consumption pegged at 28.7%. It is speculated that 80% of murders and grave injuries in Belarus are committed under the influence of alcohol. Only 11% of Belarusians completely abstain from alcohol. Mauritania in western Africa has the distinction of having the lowest prevalence of high-risk drinkers — this is because it is a dry country. 4a. High-Risk Alcohol Consumption vs. Healthy Life Expectancy It likely isn’t a coincidence that the three countries where high-risk drinking is most prevalent (Russia, Ukraine, Belarus) are not setting healthy life expectancy records. 5. Deaths by State Prevalence of all-cause death in the U.S. is highest across the mid-South, Southeast, and Appalachia.
It is interesting that the region is well aligned with the CDC-identified diabetes belt. Within the diabetes belt, 11.7% of the people have diagnosed diabetes. Outside the belt, 8.5% have diagnosed diabetes. 6. Crossfilter Crossfilter is a visual analysis technique for multidimensional data that can help clarify correlations between dimensions. To use crossfilter, simply click-and-drag on any of the charts and maps in this dashboard. Data sharing common rows between the charts will highlight in red, helping pinpoint complex relationships between various health indicators and locations on the globe. Crossfilter in action. Link to dashboard. You can learn more about crossfilter and try it for yourself here.
https://medium.com/plotly/assessing-global-health-one-at-a-time-d061a144a421
[]
2017-10-31 15:59:12.215000+00:00
['Dashboard', 'D3js', 'Data Science', 'Tableau', 'Data Visualization']
On Privilege and Original Sin
The forbidden fruit taken by Adam and Eve in the Garden of Eden certainly seems like it must have had a sour taste. For indulging their appetites a little, the two brought sin and death into the world, got kicked out of paradise, and were each assigned their own specially frustrating labor projects. Bible commentators down through history have noted an obvious lesson here: sometimes what we learn through experience is pretty bitter. Thanks to the actions of Adam and Eve, we all have been corrupted and stand in need of salvation… or so the story goes. Some thinkers on the right and the left have likened the concept of privilege to that of Original Sin. Both are things we are born into, that we cannot escape, and are meant to be dealt with through a confessional or penitent approach. Authors James A. Lindsay and Peter Boghossian draw this comparison in their article, Privilege: The Left’s Original Sin. There is no greater sin in the eyes of the left, they claim, than “having been born an able-bodied, straight, white male who identifies as a man but isn’t deeply sorry for this utterly unintentional state of affairs.” Part of what makes this analogy compelling is that some similarities do exist between privilege and Original Sin. Yet there are also plenty of concepts like apostasy, faith, and even religion itself that routinely get tossed around with secular ideas in ways that are more often tenuous than they are convincing. There have been many contrived arguments comparing a belief in science to faith or labeling atheism as another religion. The more the sacred retreats from this picture, the weaker these comparisons start to seem. In many cases, these types of arguments function as little more than an accusation of hypocrisy designed to stymie conversation. But in other instances, they may be an exercise in guilt by association. 
Presumably, though, the main complaint here is not the religious connection, but how privilege and Original Sin have both been used as shaming devices. Certainly, privilege talk can be used to try and control or stop conversation. In that sense it is comparable to Original Sin as employed by brazen preachers spreading a message of hellfire and brimstone. However, where many on the right have seen privilege as a personal attack, many on the left have been endorsing it with the aim of calling attention to broader social issues. Mychal Denzel Smith, writing for The Nation, observes that when people with privilege hear that they have privilege, what they hear is not, ‘Our society is structured so that your life is more valued than others.’ They hear, ‘Everything, no matter what, will be handed to you. You have done nothing to achieve what you have.’ Apology and repentance are not the goals for those who call attention to privilege — social reform is the goal. In their article, Boghossian and Lindsay offer discrimination as a better alternative to privilege. This may be splitting hairs, but it may also underscore a valuable point. Discrimination has a history behind it, especially a legal one, and it has often been addressed on an isolated, individual-case basis. To suggest that there are more systemic problems in our courts, in our neighborhoods, and in our society, a bigger word seems necessary. Privilege stings. It evokes an air of elitism, of undeserved benefit, and it plays off the anti-magisterial sentiments that have long been a part of American culture. Privilege is less visible than we imagine discrimination to be. It saturates and it structures, as Maggie Nelson has written. Granted, privilege has its conceptual flaws, too. It’s been argued that it associates the advantages of privilege with luxuries rather than with rights. Others have suggested that it’s not very conducive to understanding differences among various minority groups. 
Of course, these are conversations worth having, and they have been ongoing in many areas of social justice for some time now, but they are not the basis of this critique. Boghossian and Lindsay are willing to give a modest bit of credit to the term, conceding that it does describe something real and problematic. What they object to is how privilege helps to “glorify” the struggles of certain identities lucky enough to be born into the right group, while serving as a club to beat on those born into the wrong group. True, there is a problem with how often so-called identity politics centers on privileged groups like Hollywood or university campuses rather than groups that have historically been far more marginalized. But privilege talk also enables us to identify this problem as one of privilege, and if social reform is the impetus behind such talk, then these concerns are some of the focus for change. Oddly, after explaining that “everybody is privileged,” and that Original Sin and privilege are identical except in that they inhabit different moral universes, Lindsay and Boghossian contend that a distinguishing difference between the two is that the label of privilege is even more contemptible because it’s seen to be a hindrance to the less fortunate among us. But everybody is privileged, so who can rightly take the moral high ground? Some might still claim the moral high ground, though there’s no real explanation for why this would be tolerated more in the case of privilege than in that of Original Sin. Putting privilege into context doesn’t mean forcing repentance. What if, on the other hand, these commonalities between Original Sin and privilege are actually due to the perception of a confrontational attitude rather than to any conceptual similarity? There are Christians for whom Original Sin is not a weapon with which to persecute unbelievers, but a reminder to themselves to be humble and forgiving of others. 
In Romans 3, Paul considers the standing Jews and Gentiles have before God. “Do we have any advantage?” he asks. “Not at all!” No one is righteous, not even one, as he goes on to declare in verse ten. Could privilege not serve as a similar reminder to humility? It’s true that no analogy is perfect, but Boghossian and Lindsay are ambiguous enough in their use of the term privilege that it presents a problem for their argument. Let’s take a definition of privilege by Sian Ferguson at Everyday Feminism. Ferguson says, “We can define privilege as a set of unearned benefits given to people who fit into a specific social group.” This doesn’t tell us anything about most of what Lindsay and Boghossian attribute to privilege, such as it being an accident of birth, inescapable, applicable to everyone, or demanding atonement. I would argue that this is because these are ancillary ideas about the function of privilege in society. Just as the concept of sin differs from the doctrine of Original Sin, the concept of privilege differs from the political, social, and philosophical theorizing that has surrounded it. The problem is that if we’re going to bring in these ancillary ideas about privilege in drawing a connection to Original Sin, why stop here? Boghossian and Lindsay try to conceal the breakdown of their analogy with the line of qualification stating that Original Sin and privilege inhabit different moral universes. It allows them a little leeway to conveniently gloss over major incongruities like the importance of power systems for understanding privilege, or the ancestral nature of Original Sin that has it inherited from generation to generation. The doctrine of Original Sin assigns responsibility or sinfulness based on the sins of the parents. Because Adam and Eve sinned, all human beings are said to be sinful. 
Privilege doesn’t actually function in this way, although it’s notable that there is a common misconception tied up with it about what exactly people are being “accused” of. The familiar example of this misconception might be when a white person complains that ‘just because’ their ancestors benefited from slavery doesn’t mean they should have to feel guilt over it. Usually, what’s being asked for isn’t a simple confession of guilt, as noted. More to the point, privilege is meant to be about socially situating certain narratives, actors, and actions. These social and cultural relations exist in the here and now, even if they have been inherited in one form or another from the actions and institutions of previous generations. They are not like one couple’s mistake committed centuries ago, for which we are still paying. Often times, dismissing privilege as a new form of Original Sin has the effect of whitewashing the past and the way it has shaped and influenced the environments we inhabit in the present. Original Sin has long been recognized by Christian thinkers for having somewhat of a mysterious mechanism behind it. Scripture doesn’t say how sin is transmitted from one generation to the next — it’s long been debated whether Original Sin as a concept even shows up in the Bible at all. How Adam’s sin affected humanity as a whole is something of a supernatural mystery in its own right. By contrast, it can reasonably be said that social institutions and social relations sometimes or often operate in ways that are unclear or confusing. But is it ever really that mysterious? There are competing theories and explanations at play, but these are subject to empirical evaluation and there is frequently a good deal of overlap and consensus in these areas. So it’s hard to seriously argue that there is a true mystery here. 
We have so many strong historical examples of how dominant groups and those in positions of power have discriminated against others and established their own systems of advantage for themselves that I think playing the mystery card (which is partly what this analogy does, wittingly or unwittingly) borders on willful ignorance. The emphasis on guilt is also greatly overstated when it comes to privilege. Consider that we have one particular political party in the U.S. that seems to collectively react to just about any push for greater civil rights on behalf of certain marginalized groups as if it’s demanding that they wear a large millstone of guilt around their neck at all times, and I think this becomes easier to appreciate. As already acknowledged, there are sometimes those who choose to focus on shaming others for their privilege. But an intersectional approach seems a far better response to this than one based in anger or resentment. Especially if the aim of drawing attention to privilege is not to ‘convict people of their sins,’ but to raise awareness and work towards social change. That will be a hard goal to achieve if most of what we’re doing is shaming and putting down one another, though arguably much less difficult if we’re working from a position of intersectionality that recognizes privilege comes in a variety of different forms. Ironically, then, there’s one characteristic that people talk about as a similarity between privilege and Original Sin — that everyone is privileged — that seems to undermine another aspect of that analogy: that blame or guilt is its purpose. I’m not sure why we should feel persuaded by the criteria of similarity raised by Boghossian and Lindsay. They seem as if they’ve been cherry picked, but their significance can be questioned, too. Death is something we have no say over, it cannot be escaped, and it’s been said that all of us are dying from the moment we’re born. 
Yet we might question the utility of comparing death to Original Sin, or to privilege, on such grounds. It could be claimed that death isn’t as comparable for some reason or other, but we have just seen a few ways in which privilege isn’t as comparable, either. Some analogies are not made on the basis of strong similarities, but are put forward and propagated with other motivations in mind. Given how much criticism Original Sin has attracted both for implying a sense of undeserved guilt and for frequently appearing in the intrusive evangelizing practices of many Christian ministers and missionaries, it really isn’t much of a stretch to suppose that making use of it in an analogy may serve primarily as a means of disparaging and dismissing someone’s legitimate resistance to injustice and criticisms of oppression. When it comes to our own privilege, we usually aren’t exactly eager to own up to things. I can honestly admit that I still struggle with this. Parul Sehgal writes about this in an article on How ‘Privilege’ Became a Provocation: It’s easier to find a word wanting, rather than ourselves. It’s easy to point out how a word buckles and breaks; it’s harder to notice how we do. The first sin wasn’t being born into a certain class or identity. It wasn’t being part of a majority group that benefits from the marginalization of others. The first sin was arrogance. It was selfish pride that motivated disobedience, as Thomas Aquinas says in his Summa Theologica. I agree wholeheartedly with Boghossian and Lindsay that more perspective, kindness, and charity are needed. However, it seems to me that their critique of privilege has missed the mark in a number of ways. There is room for improvement, especially in how we talk to and treat the disadvantaged, but the encouragement given to “focus more on the positive qualities” you want to instill in others rings quite hollow.
This advice makes sense on an account of injustice that centers on individuals and individual acts of discrimination, but it is ill-equipped to address broader social and systemic forms of injustice. Perhaps this is where the critic has more in common with religion than he likes to think. It would be an understatement to say that monotheistic religions haven’t had very good track records of protesting privilege. On the contrary, they’ve often put in a great amount of effort defending their own privilege against so-called heretics, apostates, the ‘morally wayward,’ and even sometimes members of their own flock, despite the social justice efforts of many churches and denominations. Perspective, kindness, and charity seem mismatched to the disdain for what Lindsay and Boghossian call the religion of identity politics. It’s telling where all the faith-based imagery is located in the picture painted by the two authors, and their contempt for religion is more than evident from their own writings, one of which bears the charitable title of Everybody is Wrong About God (presumably the author is the exception to the title, but apparently that goes without saying). “You don’t get to denounce identity politics,” as Sincere Kirabo points out, “when your monomaniacal depreciation of all things religious is literally grounded in homage to the politics of your most treasured identity: atheism.” Not everything religion has taught is worthy of derision — especially when it comes to the idea that change must begin with ourselves. There is likewise nothing patently religious about seeing ourselves as benefiting from certain social structures that disadvantage others. Privilege talk that fails to recognize the need for humility and compassion in striving for justice is talk that is rightly criticized. At the same time, a critique of privilege that cloaks its main argument in anti-religious and politically loaded rhetoric is not doing anyone the favors its writers think it’s doing.
https://taylor-carr.medium.com/on-privilege-and-original-sin-f06b811284b7
['Taylor Carr']
2019-02-07 19:47:49.721000+00:00
['Social Justice', 'Politics', 'Society', 'Privilege', 'Religion']
How Does the Internet Work?
Introduction Every day we access the computer, jump onto our favorite websites, and without thinking surf hundreds of pages of information. The internet is an amazing and powerful tool at our disposal, whenever we choose to use it. We tend to take the internet for granted every single day. In this article we will dive deeper into the inner workings of the internet. Before starting your development journey it is good to understand how the internet works. Think of it this way: before you start to create something for a platform, you want to understand how the platform works. The ins and outs, the good and the bad. Understanding this will allow you as a developer to better create websites and web apps that will thrive. So what exactly is the internet? The internet is a networking system. Imagine the internet as a long piece of wire that connects different computers to each other. One computer can be located in Arizona, and the other in New York. Through this wire, data can be delivered from one computer to another. A computer that a user powers on to access the internet is called the client. Some of the computers on the wire are not like the others. They have a special job. They have to be online 24 hours a day, 7 days a week. What’s the saying? No rest for the wicked. These special computers serve the requested data to a client. So what is this special computer? It is called a server. Fitting, don’t you think? Think of a server as an extensive library, open 24/7, waiting to hear from you, the client, so it can serve you with all of the answers and cute animal pictures you can handle. Imagine if you were able to physically walk into this massive library of information. This could be overwhelming, and it would take a really long time to find what you are looking for. How does the internet solve this problem? 
When you, as the user, access a website, for example Facebook.com, your browser will deliver a message to your internet service provider (ISP). Your internet service provider is a company like Comcast or Cox that provides internet to your residence. When your browser sends your ISP the message that you, as the client, would like to access Facebook.com, the ISP will then deliver that message to a domain name server, or DNS. A DNS is essentially the phonebook of the internet. When it receives the message from the ISP, it will look through its database to find the exact IP address of the website you are trying to reach. Now this leads us to our next question. What is an IP address? Every computer has a unique IP address. A great example of this would be our ID cards. We each have one, and each of our ID cards has a unique ID number. When a client is requesting data, each computer can be located by its unique IP address. Once the DNS finds the right IP address for Facebook.com, it can send it back to your browser. Now that you have the exact IP address where you can find Facebook’s homepage, you can relay a direct request to that address through your internet service provider. This request will be delivered through the internet backbone, a.k.a. our wire. The internet backbone is a network of comprehensive underwater cables that run throughout the world. These cables power the internet and connect all of the world’s internet users. You can view the internet backbone at submarinecablemap.com. Once you have the IP address you want to access, the browser will send another message through the internet service provider via the internet backbone to the server located at 31.13.64.35, the IP address you are trying to reach, in this case, Facebook. The computer located at that address is the Facebook server. On this server are all of the files needed to view Facebook’s homepage. 
The server then sends all of the information back to the client via the internet backbone in a matter of milliseconds, and the Facebook homepage is displayed for the client. This is the power of the internet. I challenge you to open a browser and type in 31.13.64.35, Facebook’s IP address, so you can see the homepage delivered to your screen.
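The phonebook lookup described above can be tried with Python’s standard library. As a minimal sketch, “localhost” stands in for a real site so the snippet works even without a network connection; swap in a hostname such as “facebook.com” to have the resolver query real DNS servers:

```python
import socket

def resolve(hostname: str) -> str:
    # Ask the system resolver (which in turn queries DNS servers)
    # for the IPv4 address behind a hostname: the "phonebook"
    # lookup described in the article.
    return socket.gethostbyname(hostname)

# "localhost" always resolves locally, so this runs even offline.
print(resolve("localhost"))  # 127.0.0.1
```

Once the browser has the address, it opens a connection to that IP and requests the page files, which is the step `socket.create_connection((ip, 80))` would perform.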
https://medium.com/vol-1-technology-software-development/how-does-the-internet-work-a7d5aa1843a
['Brittney In Beta']
2020-11-08 18:50:39.236000+00:00
['Coding', 'Technology', 'Programming', 'Software Development', 'Software Engineering']
Three Signs of a Good Personal Trainer
#1. An initial assessment A personal trainer should be just that — your personal trainer, and the sessions your trainer puts you through should be designed around your personal needs. Before your first session together, your trainer should perform an initial assessment to identify what exactly those needs are. An initial assessment may include: Asking about any medical/injury history Establishing your specific goals (i.e. Do you want to lose weight? Gain muscle? Why?) Identifying any mobility restrictions you might have Using this information to design your individual training program Too many trainers will lay out the same cookie-cutter program for the 19-year-old football player as they do for the 72-year-old with a history of osteoporosis. If your program isn’t built specifically for your needs, keep on looking. #2. Having a structured plan There’s a difference between “training” and “working out”. Training for something involves laying out a plan with an end goal in mind. For example, if your goal is to lose 10 pounds of fat and add 5 pounds to your bench press in 12 weeks, your programming is going to reflect that goal. On the other hand, putting someone through a series of random, aimless workouts just for the sake of getting sweaty is lazy personal training. How many times have you heard somebody brag about their coach/trainer “kicking their ass” in the gym? That’s because many newcomers associate feeling exhausted with a productive workout — but that’s not always the case. A five-year-old could tell you to do 1,000 burpees and you’d be hooked up to a defibrillator before you could finish. It would “kick your ass”, but what did you accomplish? Are you any closer to your goal than you were before the workout? Do you even know what your goal is? (See: Initial assessment) When there’s no clear, defined goal established with your training, that’s a tell-tale sign of a trainer who doesn’t know how to properly structure a program. 
You — and your hard-earned money — deserve better than that. Make sure they have a plan. #3. Education A trainer/coach who takes themselves seriously is going to invest in themselves. Not every personal trainer necessarily has to have earned a bachelor’s degree in Exercise Science, but they should have invested in some form of education to provide themselves with the requisite knowledge needed to work with clients before doing so. This typically comes in the form of a personal trainer certification program. Some of the most well known certifications are: ACE — American Council on Exercise CPPS — Certified Physical Preparation Specialist NASM — National Academy of Sports Medicine ACSM — American College of Sports Medicine ISSA — International Sports Science Association One caveat: As someone who has earned multiple certifications over the years, I can tell you from experience that, while having an education is one thing, knowing how to apply that education in the real world — to people of different shapes, sizes, ages, and personalities — is a whole different ballgame. For that reason, I’d like to point out that while certifications or degrees of any level are definitely not the “be-all-end-all”, they’re still of value, because they show that your trainer has at least invested in themselves enough to achieve a rudimentary level of education before working with clients. That’s a sign that they take themselves and their brand seriously — which likely means they’re going to take you and your results seriously, too. Make sure your trainer has invested in themselves before you invest in them. Image by Darren Constance from Pixabay There are many variables that make up a great personal trainer, but these three fundamental boxes should be checked before you invest your time, money, and health into one of them. 
Find someone who assesses your needs, establishes your goals, has a plan to help you reach those goals, and has invested in an education — your body will thank you.
https://zackharris.medium.com/three-signs-of-a-good-personal-trainer-d8689d17264
['Zack Harris']
2020-03-18 00:32:09.899000+00:00
['Fitness', 'Health', 'Lifestyle', 'Coaching', 'Self Improvement']
BigTips: Random Numbers and Random Dates
When working with and implementing BigQuery, there are a number of small problems for which I couldn’t find documentation or a working example of a solution. This occasionally happens with any database. While these probably won’t be groundbreaking problems to solve, hopefully it’ll make someone’s day a little easier. Sometimes, it’s the little things. BigTips: Generating random numbers in a range, and random dates. This one will be pretty quick, and I’m filing it under, “Why Isn’t This Simple Thing a Thing?” Random Numbers In A Range This whole thing really started when I needed to generate some test data, and noticed that BigQuery’s RAND() function doesn’t have upper and lower bounds. It’s not a complex thing, but one of those minor things that’s just easier to copy and paste from somewhere, so here it is so you can just copy and paste it. Random Dates in a Range This then led to my next minor headache when trying to generate test dates for something. Pretty simple thing, again. If you want to reuse the random number in a range generator, it’s super simple and you can just call that after converting everything to POSIX time. In that example, we cast the result of the rand_date() function to a DATE type for illustrative purposes. The function will return a TIMESTAMP that’s random, but then the count for each distinct value is usually 1 or 2, which isn’t too interesting to look at. If you don’t need random numbers and only need random dates, and don’t feel like spending time to redo the function, here you go. There you have it, a quick tip that hopefully helps make someone’s BigQuery day a little bit easier! Also be sure to check out more BigQuery stuff in the Google Cloud Medium Channel! Happy Querying!
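The pattern being described is simple arithmetic: scale RAND()’s [0, 1) output by the width of the range and shift it by the lower bound; for dates, apply the same trick after converting the bounds to day counts or POSIX time. A minimal Python sketch of the same logic (the actual BigQuery UDFs are SQL; the function names below are illustrative, not the article’s):

```python
import random
from datetime import date, timedelta

def rand_in_range(lower: float, upper: float) -> float:
    # Mirrors the SQL pattern lower + RAND() * (upper - lower):
    # random() returns a float in [0, 1), so scaling and shifting
    # yields a float in [lower, upper).
    return lower + random.random() * (upper - lower)

def rand_date_between(start: date, end: date) -> date:
    # Convert both bounds to a day span (POSIX seconds work the
    # same way, just with a bigger unit), draw a random offset in
    # that span, and convert back to a date.
    span = (end - start).days
    return start + timedelta(days=int(rand_in_range(0, span)))
```

The BigQuery versions replace `random.random()` with `RAND()` and do the date conversion with `UNIX_DATE`/`DATE_FROM_UNIX_DATE` or the TIMESTAMP equivalents.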
https://medium.com/google-cloud/bigtips-random-numbers-and-random-dates-84da7c309c3d
['Brian Suk']
2020-12-10 15:44:17.185000+00:00
['Google Cloud Platform', 'Analytics', 'Big Data', 'Bigquery', 'Data']
The first programming language you should learn… A debate…
Set the Stage Both JavaScript and Python offer a wide range of features and have extensive, amazing communities behind them. We are going to delve into the technical and professional aspects of both languages while avoiding some of the lower-level technical details. In doing so, I hope to paint a picture of which language you should choose based on your preferences and personality. We will compare these languages on only two key aspects: learning curve and utility/use-cases. 1. Learning Curve JS and Python both have low learning curves and are quite easy to pick up. Both are dynamically typed, which helps beginners tremendously. Python is currently embracing type hints, but these are not enforced at runtime. Similarly, JS has a language superset called TypeScript which enforces types on all objects, but JS itself does not. Speaking of which, both languages follow OOP principles, which is another plus for learning since objects are a great way to relate abstract coding structures to real-life structures. One downside to Python is that it requires installation, and Python versions change relatively frequently. Managing Python versions is a known headache for any Python dev, but there are many packages out there to help combat this issue, including conda, poetry, and virtualenv. In order to run Python scripts you either need to use Jupyter Notebooks (which require installation) or a terminal and code editor to write code (yes, these could be the same via vim/nano). The Anaconda installation installs VS Code, Jupyter Notebook, and Python all at once and is available on all platforms. JS, on the other hand, can be written directly in your browser simply by navigating to the Chrome Developer Tools. This makes it very user friendly since you can code right in your browser and see the changes happen directly on a web page. No installation required. 
This principle does break down once you start installing node packages and using frontend frameworks, but for Vanilla JS, it couldn’t be easier to get started. As far as syntax goes, both languages are simple to understand and get used to, and any code editor provides great support for both. Arguments could be made for Python over JS on the syntax front, but I think they are simply different and that, with editor support, neither would present a large hindrance to getting started. On second thought, I would select Python here due to the absence of curly braces and semicolons, despite its whitespace sensitivity. Especially considering that advanced frontend frameworks like the popular React library or Angular utilize ES6 syntax, which can be quite confusing at times. 2. Utility / Use Cases The use cases for these languages are where they really differ. This also bleeds into job prospects. Python is excellent for data analysis, data engineering, data science, one-off scripts, automation, machine learning, and backend web development. JavaScript is excellent at nearly everything on the web, from frontend styling and animation to backend frameworks and interacting with databases. The simplest way I can explain the difference is this: Python works in many places; JS works on the web only. If you want to build on the web only, the choice is easy: JS. If you want to build small games, desktop apps, software, or do data-related tasks, choose Python. Interestingly, Python can do some of the things JS can do regarding web development backends. In fact, Python’s two most popular web frameworks, Django and Flask, run many popular web backends. JS has its own backend frameworks, including Express.
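The earlier point that Python’s type hints are documentation rather than runtime checks is easy to demonstrate with a short sketch (function name and values are illustrative):

```python
def add(a: int, b: int) -> int:
    # The hints say "ints in, int out", but Python does not check
    # them when the code runs; they exist for readers and for
    # external tools like mypy.
    return a + b

print(add(2, 3))      # 5
print(add("2", "3"))  # "23": strings slip through and concatenate
```

A static checker would flag the second call, but the interpreter happily runs it, which is exactly what “not enforced at runtime” means.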
https://medium.com/the-innovation/the-first-programming-language-you-should-learn-a-debate-93611b06acd2
['Nick Anthony']
2020-11-05 18:00:19.059000+00:00
['Programming', 'JavaScript', 'Data Science', 'Python', 'Web Development']
Does your business need a CRM? 10 Warning Signs
Although Customer Relationship Management (CRM) software has become a burgeoning phenomenon, there is no wisdom in simply jumping on the bandwagon and investing in CRM technology. Firstly, you should know what CRM software is and how it will impact your business operations. Secondly, you should look for the following ten warning signs which might indicate that it’s about time you get CRM software. 1. You are still using Excel sheets to run your business Using manual processes like spreadsheets results in business operations being highly inefficient. It wastes your employees’ precious time on repetitive tasks, which leads to a decline in productivity. Also, employees these days find working on Excel sheets extremely boring. Your business can get a lot of mileage from using a CRM, as it eliminates the need to do repetitive tasks manually through automation. Some of the most common applications of workflow automation in a CRM are marketing automation, sales automation, and customer support automation. All of these will make your business hours more productive and fruitful by finishing routine tasks for you without any hassle. 2. You do not have a single point of consolidated and centralized customer data The more information you have about your customers, the better strategic decisions you can make. However, a plethora of information is of little value if it is stored in Google Sheets, business cards, handwritten notes, etc. A CRM can give you a 360-degree view of your customers by allowing customer-facing employees across different departments to record every interaction they have in one central database. Information can be accessed and updated in real time. 3. Your salespersons have insufficient knowledge about your customers As your customer base grows, it becomes increasingly difficult for salespersons to remember each and every detail related to a customer when there’s no database to give them instant consolidated information about the customers. 
If customers get the slightest hint that they are being neglected, they will simply walk away. That will be downright damaging for your business. With a CRM in place, salespersons will have all the relevant data about the customers at their fingertips, e.g., past calls, meetings, emails, etc. They will know who the customer is and what product they are interested in based on past interactions. Hence, a CRM will enable the salespersons to serve the customers in a more personalized manner. 4. You are treating all customers in a similar fashion The need to know your customer base for targeted marketing has become more important than ever. If you are targeting every customer with the same promotional content, you are making a big mistake. Every prospect needs individual attention depending on which stage of the sales life-cycle they are in. A CRM system gives you the ability to segment leads based on their interests, needs, preferences, demographics, industry vertical, and more. It helps salespersons plan effective strategies to move leads from one stage of the sales funnel to another quickly. 5. Your marketing and sales departments lack coordination The marketing department is responsible for nurturing leads, whereas the sales department is accountable for closing deals. Both need to work in harmony to ensure that leads are converted into opportunities and, finally, deals. With a CRM, the marketing and sales departments can stay updated by having access to real-time data on a customer’s profile. The marketing team can pass leads to the sales team without any manual effort. The sales team can then act on those leads and try to convert them into deals.
https://medium.com/business-startup-development-and-more/does-your-business-need-a-crm-10-warning-signs-d818d07a8299
['Phillips Campbell']
2017-06-20 12:29:45.920000+00:00
['Marketing', 'Business', 'Customer Service', 'CRM', 'Technology']
Conservatism Doesn’t Belong in the 21st Century
At its core, conservatism is a dogma of immobility and regression. It is turned towards the past and deterministic in nature. We need to preserve our traditions. We need to revert to an ancient golden age. We need to be great again. Our actions do not define our selves. Our actions are defined by who we are. It is an essentialist philosophy. Conservatism was born in the wake of the French Revolution as an aristocratic reactionary movement opposed to the democratic ambitions of the people, that frightening plebe that had cut short a thousand-year-old dynasty and its last monarch. The concept of aristocracy is rooted in the idea that people aren’t born equal and, consequently, conservatism believes that the persons’ identities cannot be matters of choice, but are conferred on them by their unchosen histories, so that what is most essential about them is…what is most accidental. The conservative vision is that people will come to value the privileges of choice…when they see how much in their lives must always remain unchosen. — Stanford Encyclopedia of Philosophy The epistemological consequences of this view that essence precedes existence is that: [O]ne cannot know the general principles whose implementation would benefit the operation of society [because] circumstances give every political principle its colour.— Ibid. This idea is flawlessly expressed in this quote by Margaret Thatcher: there’s no such thing [as society]! There are individual men and women and there are families. And no government can do anything except through people, and people must look after themselves first. … There is no such thing as society. There is living tapestry of men and women and people and the beauty of that tapestry and the quality of our lives will depend upon how much each of us is prepared to take responsibility for ourselves and each of us prepared to turn round and help by our own efforts those who are unfortunate. 
More fundamentally, Conservatives regard the radical’s rationalism as “metaphysical” in ignoring particular social, economic and historical conditions: I cannot [praise or blame] human actions…on a simple view of the object, as it stands stripped of every relation, in all the nakedness and solitude of metaphysical abstraction (Burke). — Ibid. Conservatism is based on the notion that the circumstances of the actor must be taken into account to judge their actions. We cannot assess the quality of the latter unless we know who the former is. Conservatism also rejects the validity of general, abstract political concepts because they obviously fail to account for each individual’s circumstances. Something that purports to be “for the good of all” can only be “against the will of each” and is therefore bad. As Thatcher said, There is no such thing as society, only individuals trying to improve their lots despite others, not together.
https://medium.com/curious/conservatism-doesnt-belong-in-the-21st-century-35afa41aaab5
['Nicolas Carteron']
2020-12-20 16:44:37.701000+00:00
['Politics', 'Society', 'Philosophy', 'Existentialism', 'Conservatism']
Connecting Pantone With Data Viz
Pantone Connect is a recently released app for selecting Pantone colors or for extracting Pantone colors from images on your desktop or mobile devices. Personalized color palettes can be digitally created from Pantone’s color library of more than 10,000 hues. In this writing, I discuss how the Pantone Connect app can be used to facilitate the process of creating data visualizations by combining it with other tools like Viz Palette and the Color Blindness Simulator — Coblis. During this journey, I will focus on four tasks: (1) creating a Pantone color theme for a specific data visualization; (2) using Pantone Connect to extract colors from an image to build a customized color theme; (3) integrating Pantone Connect with other tools to address color deficiencies and data visualization possibilities; and (4) identifying Pantone colors in an existing data visualization for high-end printing or other physical output specifications. The user interfaces to the Pantone Connect web site and mobile app. In addition to the web site, a free version of the Pantone Connect mobile app, with access to the Pantone color libraries, is available from Apple’s App Store, Google Play, and Adobe Exchange. The user interfaces to the web and mobile versions are shown above. Pantone requires you to open a free Pantone account in order to have access to the majority of features in the app. The Pantone Connect functions described here are free to use. In the future, some advanced functions may require a monthly or yearly subscription. Pantone is still building the Pantone Connect app and will be announcing further details about the product’s business model in the future. Both Viz Palette and Coblis are freely available online tools for your continued use. What is the Pantone Matching System? 
The Pantone Fashion Color Trend Guide for Autumn 2020/2021, see: https://www.pantone.com/color-intelligence/fashion-color-trend-report/new-york-autumn-winter-2020-2021 . Note: NYFW refers to “New York Fashion Week”. The Pantone Matching System (PMS) is a proprietary color space used primarily in printing and also in a wide range of other industries including fashion, cosmetics, fabric, plastics, and paints. PMS methods have evolved into a standardized color reproduction system that utilizes the Pantone number system to identify colors. Individuals at different geographic locations can refer to particular PMS codes to make sure colors match without direct personal contact with each other. The traditional Pantone Formula Guide. See: https://www.pantone.com/graphics. In traditional form, the Pantone color guides consist of narrow cardboard sheets (approximately 6 by 2 in or 15 by 5 cm) that are printed on one side with rectangular samples showing the different Pantone colors. The guide is bound together at one end to allow for opening the strips out in a fan-like manner. Additionally, Pantone provides binders with rectangular swatches and digital media resources. In 2009, Pantone released the myPantone app for iOS platforms and later extended it to Android devices. In August 2016, the Pantone Studio app for iOS platforms replaced myPantone. Pantone Connect, released in June 2020, is their latest digital app for color matching and color theme creation. Creating a visualization with Pantone colors adds a dynamic quality to your visualizations, making novice users of color appear as experts. It also increases the chances of successful high-end printed output if the specified Pantone inks are used. Using Pantone Connect to Create a Color Theme Selecting the PICK option in the Pantone Connect mobile app. Here, I will work with the Pantone Connect mobile app to build a color theme for an Area Chart visualization that will be discussed later in this writing. 
The first step is to select the PICK option and to scroll through the colors to select a key or base color. Pantone Connect automatically defaults to the Formula Guide Coated book of colors. Selecting the Pantone 2029C color in Pantone Connect. I chose Pantone 2029C and saved it into my color palette. These results are shown on the left. After selecting a Key color, Pantone 2029C in my case, Pantone Connect provides options for gaining more details about the color. By selecting Full Screen, an image of Pantone 2029C appears on your phone with various notations such as the sRGB and the Hex codes. This result is also shown on the left. These notations will be important later when transferring this color data to a visualization tool. Selecting the HARMONIES option in the Pantone Connect mobile app. Selecting Harmonies results in a visual listing of Analogous, Complementary, Monochromatic, Split Complementary, Triadic, and Tetradic color harmonies with specified Pantone colors from the given Pantone color library. Let’s briefly examine the concept of color harmonies in further detail. What are Color Harmonies? Color Harmony is the process of choosing colors that work well together in the composition of an image. Similar to concepts in music, these harmonies are based around color chords on the Color Wheel that help to provide common guidelines for how color hues will work together. Color Wheels are tools that depict color relationships by organizing colors in a circle to visualize how the hues relate to each other. Below, I show the combined Red Green Blue and Cyan Magenta Yellow color wheel on which Pantone color harmonies are based. The Red-Green-Blue Color Wheel is based on the concept that Red, Green and Blue (RGB) are the color primaries for viewing displays like what we see on our desktop and mobile devices. 
The RGB color model is defined as an additive color model in which the combination of Red, Green, and Blue lights produces White light. There is also a Cyan, Magenta, and Yellow (CMY) color model for printing that is called a subtractive model since the inks combine to produce Black. Interestingly, the RGB color model and the CMY color model have a complementary relationship. Red is the complement of Cyan, Green is the complement of Magenta, Blue is the complement of Yellow, and White is the complement of Black in their respective color spaces. The same color wheel thus applies for both RGB and CMY color spaces. Interestingly, the Pantone Connect app is showing us Pantone colors in RGB display mode while in practice the CMY color model is used in the application of the Pantone inks for color reproduction. This complementary relationship is shown above, with the RGB/CMY color wheel. Pantone Color Harmonies Returning to the results from the Pantone Connect app, I select the Harmonies option for my Key color of Pantone 2029C. As noted previously, this results in a visual display of the Analogous, Complementary, Monochromatic, Split Complementary, Triadic, and Tetradic color harmonies. Unfortunately, Pantone Connect does not currently provide a color wheel as a reference to help visualize each of these color harmony results. It is anticipated that future updates to the app will address this. Pantone Color Harmonies for the Pantone 2029C color. Above, the Pantone Color Harmonies for Pantone 2029C are shown. As a guide to you, I have added my own annotations, based on the RGB/CMY color wheel, to visually define each relationship. It is important to note here that all colors are determined from colors specified in the Pantone Formula Guide Coated. The results could be different if another Pantone color guide is selected or if another color selection tool is used. Let’s take a closer look at each of these Pantone Color Harmonies. The Analogous Color Harmony for Pantone 2029C. 
An Analogous harmony refers to three or more colors adjacent to each other on the Color Wheel. The Complementary Color Harmony for Pantone 2029C. A Complementary harmony indicates two colors that oppose each other on the Color Wheel. The Monochromatic Color Harmony for Pantone 2029C. A Monochromatic harmony is created with one hue and various tints, tones, and shades of that hue to create the color theme. The harmony is noted by the individual colors lining up in a straight line from the center of the Color Wheel to the key hue. A tint is defined as a hue mixed with White, while a tone is designated as a hue mixed with Gray, and a shade is created when a hue is mixed with Black. The Split Complementary Color Harmony for Pantone 2029C. The Split Complementary harmony combines a Key color with the two colors directly on either side of the Complementary Color. The Triadic Color Harmony for Pantone 2029C. The Triad or Triadic Color Harmony is based on three colors that are equally spaced on the Color Wheel. The Tetradic Color Harmony for Pantone 2029C. When four colors form a rectangular shape on the Color Wheel, a Tetrad or Tetradic harmony results. A Tetrad harmony is also called a Double Complementary harmony since two Complementary Color Harmonies form the Tetrad. Creating a Color Theme in Pantone Connect My future Area Chart visualization has four data elements and I want each element to appear independent of the others. If I chose a Monochromatic or Analogous harmony, the colors might appear to have a sequential relationship with each other. If I chose a Complementary or Split-Complementary harmony, the data elements might appear to have a diverging relationship. With a Triadic color theme, I would need to find a fourth color. So, I select the Tetradic color harmony to convey the concept that the data elements are distinct from each other. I then save the specific Pantone colors in my New Palette Workspace. 
By opening a Pantone Connect account, I can name and save the color theme for future use. I name and save it as “Pastel”. This process is shown below. Saving the Tetradic Color Harmony for Pantone 2029C and naming it as “Pastel” in Pantone Connect. Brushing over each of the color swatches in the Pastel theme, it is possible to obtain the web Hex codes for each Pantone Color. These results are shown below. Using Pantone Connect to locate the web Hex codes for each of the Pantone colors in the Pastel theme. Creating an Area Chart Visualization with the Pastel Pantone Theme Now that I know the web Hex code for each Pantone color in my Pastel theme, the next step is to apply the color theme to a data visualization. Here, I will create an Area Chart example with the Apple Numbers app. These results are shown below.
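The complementary relationship described earlier (Red is the complement of Cyan, Blue of Yellow, and so on) amounts to inverting each RGB channel of a hex code. A quick Python check of that rule (the hex values below are the standard primaries, chosen for illustration, not Pantone-specified colors):

```python
def complement(hex_color: str) -> str:
    # Parse "#RRGGBB" into the three 0-255 channels, invert each
    # channel (255 - value), and reassemble the hex code. Inverting
    # Red gives Cyan, Green gives Magenta, Blue gives Yellow.
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return "#{:02X}{:02X}{:02X}".format(255 - r, 255 - g, 255 - b)

print(complement("#FF0000"))  # #00FFFF — Cyan, the complement of Red
print(complement("#FFFFFF"))  # #000000 — Black, the complement of White
```

The same inversion applied to the web Hex codes that Pantone Connect reports would give the opposing point on the RGB/CMY color wheel for any palette color.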
https://medium.com/nightingale/connecting-pantone-with-data-viz-65e725a96206
['Theresa-Marie Rhyne']
2020-09-18 12:56:01.231000+00:00
['Visual Design', 'Data Science', 'Color Theory', 'Data Visualization', 'Design']
Your Stories, Heard and Shared
Now is not the time for silence. Photo by KEREM YUCEL/AFP via Getty Images We at Medium want to acknowledge the pain and trauma that people across the United States are feeling right now due to acts of racist violence that have unfolded recently in Georgia, in Minneapolis, in Louisville, in New York City, and beyond. What follows are the major events that precipitated the current outrage and unrest. Ahmaud Arbery, a 25-year-old Black man, was jogging this February in a Georgia neighborhood where he was hunted down and killed by a father and son, who are White. The two men claimed they thought Arbery was a burglary suspect, and they were not arrested until the uproar that followed the release of a video of the brutal killing. In Minneapolis, a White police officer killed a Black man in custody — George Floyd — by kneeling on his neck as he begged for relief, lost consciousness, and died. A bystander captured the murder on video. Protests have ensued in Minneapolis and many other cities for days now. The police response across the United States has been violent, as seen in videos of protesters being billy-clubbed, shoved, driven into, hit with rubber bullets, and pepper-sprayed. Journalists have been arrested on the job. In Louisville, where protests broke out Thursday night, a Black woman — Breonna Taylor — was shot by police in March as they raided her home during an investigation that reports indicate was a case of wrong-person, wrong-place. The 9–1–1 recording is harrowing. In New York City, a woman was walking her dog off leash in a leash-only section of Central Park when a bird-watcher in the area asked her to leash her pet. The birder — Christian Cooper — is Black. The woman is White. She reacted by threatening, on video, to call the police and falsely claim that an “African-American man” was threatening her life. Click that link ← to hear from Cooper himself.
Recorded incidents of harassment and discrimination against Asians and Asian-Americans have also increased during the coronavirus pandemic. We have been reading about it for months now. We could go on. These acts are tragic, traumatizing, and infuriating. They provoke emotion and outrage, which we all feel. The pandemic, which has disproportionately infected and killed people of color, has revealed the inequities that have always been present in the U.S. The systemic racism has been thrown into sharp relief. Being heard, being seen, is harder than ever. Many of us are quarantined. This takes a unique toll on people who are left out of the picture and the conversation far too often even in the Before Times — namely, people of color. On Saturday, in between caring for my toddler and catching up on housework and work-work and checking in with my loved ones, I was able to dig into what people are publishing on our platform. I found story after story, dispatch after dispatch, of people taking to Medium to share their thoughts, feelings, predictions, and experiences. This is how I spend most of my Saturdays, but this one felt different. There were even more of you, and an even more diverse set of voices, from across the country. There is a lot to say, right now, and we invite you to say it here, to read it here, to absorb it here. Here are some of the powerful stories we’re reading: Tamika Butler is here. Jada Gomez shared her experience as a Black and Latinx journalist witnessing her fellows in trade get arrested while doing their job. Journalist and beautiful writer Shenequa Golding is here, speaking on what it’s like to try to maintain so-called professionalism when you’re mourning inside. The always excellent William Spivey is here with some historical context. He also has thoughts about living while black. Hanif Abdurraqib always changes and improves how I think about things. Here he is on America, now. Aliya S. King salutes Black men. 
This writer takes on the erasure of Arbery’s humanity. Andre Henry asks a powerful question, and envisions a better world. A Minnesota local wants to talk to her kids about racist violence. The former mayor of a city he loves — that would be Minneapolis — is mourning. Tim Wise is here. His take: Violence is part of our shared history. This piece is a wake-up call. This one, a reality check. This one sums it up. And Tirhakah Love asks: Can we cool it with the racism? On that note, here’s a helpful list of tactical actions that support racial justice. Along with a reading list. If you see other stories we’ve missed and that you think should get more attention, please post them in the responses. As we make our way through these times, let’s try to find strength in each other. All forms of civil discourse are welcome here. We extend love to our colleagues, our readers, and our writers.
https://blog.medium.com/your-stories-heard-and-shared-4c1298c1eb10
["Siobhan O'Connor"]
2020-05-31 16:51:43.797000+00:00
['Equality', 'Society', 'Race', 'Racial Justice']
Healthcare Business Intelligence
Technology is changing the way we live our lives almost every day and in a multitude of different ways. One of these transformations is occurring in the field of healthcare. Health is a business that has been around for centuries, with modern medicine helping to extend the average lifespan by decades; however, new innovations are set to make this whole process significantly more user-friendly and useful. Business intelligence itself is a fairly new innovation and refers to the collection and use of data to improve business operations and strategic planning. Healthcare business intelligence builds on this same framework, but in this case the data in question is patient data gathered through a variety of channels. Healthcare BI has a slightly different purpose than business intelligence alone. With healthcare business intelligence, organizations are still looking for ways of improving operations and costs, but the greater focus is the goal of improving patient care. Healthcare Analytics as a Business While patient care is a primary mandate for healthcare BI tools and software, businesses entering the market have the potential of realizing a very healthy return on their investment. In 2019 the market was already fairly robust at US$14 billion, but this is set to skyrocket over the coming years, with the healthcare analytics market expected to reach US$50.5 billion by 2024. This investment is expected to primarily focus on North America, with Europe a distant second followed quite closely by Asia. Over the coming years, North America by itself will far surpass the current global investment of US$14 billion. This growth is fueled by a variety of factors, one of which is a growing focus from government towards a more personalized provision of medical care. Benefits of BI in Healthcare While there are many reasons to embrace healthcare BI, there are also some clearly obvious benefits that need to be called out.
Reduced Costs In many parts of the world, including North America, healthcare is a business. While doctors and clinicians got into the role to help people, money is still a driver that needs to be acknowledged. Running a medical practice or hospital is expensive, with resource costs, tools, equipment, and pharmaceuticals all adding up. However, clinical business intelligence tools can help drive these costs down in a variety of different ways. Healthcare BI software can track populations and perform analysis to better understand the likelihood of illness and infection in specific areas and locations. Healthcare BI tools can improve communication and information sharing between different organizations and even between countries. Turning a Doctor into a Data Scientist BI tools can be complicated and complex to use and understand. However, as healthcare itself has transformed, so have the BI tools that support healthcare. Now doctors and other healthcare experts have a means of extracting information in a simple manner, without requiring an understanding of coding or databases. Self-service tools make front-line staff more efficient and effective. They let healthcare providers access information in real time to improve their ability to make decisions and judgments in a more timely manner. In addition, these self-service tools allow simple customization so that patients too can understand the information being presented. Personalized Treatment Services In years gone by, patient treatment was a matter of best guess more than anything. As time progressed and information was shared between physicians, researchers, and clinicians about what worked and did not work when it came to treatments for specific illnesses and diseases, better treatment options were discovered and refined. Health data intelligence takes that a step further and helps doctors understand why a treatment that worked for one patient might or might not be suitable for another.
Business analytics in healthcare can be further refined to demonstrate the risks of specific treatments based on a patient’s current condition and medication. Now treatments can be personalized based on specific genetic blueprints, targeting treatments in a more concrete manner. Evaluating Caregivers Healthcare is a business, as already mentioned, and one of the precepts of business is the service provided to customers. Within healthcare, those customers are the patients who engage with the doctor or medical facility. These patients are concerned not only with how they are treated while in the facility, but also with the information they receive, how much empathy is or is not shown in the given situation, and more. Like restaurants, healthcare providers too can be reviewed by patients, and this information is gathered through different tools. Clinical business intelligence software can evaluate information on the carers within an organization and use this information to further improve the services provided to patients. Improving Patient Satisfaction Health BI has multiple impacts on patient satisfaction. Better and more customized treatment ensures that patients receive targeted services focused on their specific illnesses or conditions. Customized treatment options drive improved patient outcomes, leading to overall better quality of life. In addition, clinical and hospital business intelligence helps make the facilities themselves more efficient and effective, improving wait times and overall service levels. Healthcare BI Tools Healthcare BI software is a subset of BI software targeted towards the healthcare market. These tools provide specialists in the medical field with an improved way of reviewing data gathered from different sources. These sources could include patient files and medical records but can be expanded to include additional information, such as financial records, to better enable the facility in their care and treatment planning.
Healthcare BI tools integrate with other software in a medical establishment, but it is crucial to understand that they are not the same as software like EMR and EHR. Tableau One of the leaders in the BI marketplace, Tableau helps organizations create and publish dashboards extremely easily. Tableau has some inbuilt data preparation tools that simplify the process of gaining information. Tableau also has some prepared templates for users in the healthcare market, which helps even further with implementation, letting organizations quickly drill down into their information. Power BI Power BI is a Microsoft product and as such is very familiar to users of the Office suite. It integrates directly with other Microsoft products like Excel and SharePoint and lets users analyze, model, and graphically represent data in a variety of different dashboards and reports. Power BI is fairly intuitive and easy to use, with a built-in AI engine that lets users analyze clinical data quickly and easily. Sisense Sisense, like Tableau, has dedicated integrations for the healthcare market. However, Sisense takes it perhaps a step further with a healthcare analytics module built specifically for healthcare information and data. Sisense lets you pipe data in from a variety of different data sources so you can integrate all of the different touchpoints in a single interactive dashboard. NIX Experience In Healthcare BI As a leader in software development, NIX was contracted to build a solution for a global organization. This company was looking for a way of improving the information available to company executives. Executives were interested in the visualization of specific indicators related to finance, quality of care, and clinical services. The NIX team used data from multiple different applications to determine the key areas that needed to be measured. They determined that the best path forward was the use of Tableau as a solution.
Tableau was visually appropriate and integrated with all of the systems but also provided the security that the organization needed in terms of patient information. NIX worked with Tableau extensively and also implemented a separate Java-based component to further improve security and authentication. In addition, another component was added which improved the scheduling of data extracts. The NIX team successfully built a solution in a very short timeframe that met all of the client requirements, leading to a successful product launch shortly thereafter. If you are interested in healthcare business intelligence and are looking for a partner with experience for your project, contact us. At NIX we understand the business of software and healthcare and can help you ensure that you are a success at both.
https://medium.com/nix-united/healthcare-business-intelligence-c28bd7cfb5e7
[]
2020-11-13 13:19:21.324000+00:00
['Healthcare Technology', 'Software Development', 'Healthcare', 'Nix', 'Business Intelligence']
Top 30 data science interview questions.
Data science, also known as data-driven decision making, is an interdisciplinary field that uses scientific methods, processes, and systems to extract knowledge from data in various forms and to make decisions based on that knowledge. There is a lot that a data scientist should know, so I will give you a list of data science interview questions that I faced during several interviews. If you are an aspiring data scientist, you can start from here; if you have been in this field for a while, some of it may be repetition for you, but you will still get a lot out of it. I will start from very basic interview questions and cover advanced ones later, so let's get started. 1. What is the difference between supervised and unsupervised machine learning? Supervised machine learning: Supervised machine learning requires labelled training data: each training example pairs the inputs with a known target output that the model learns to predict. Unsupervised machine learning: Unsupervised machine learning doesn't require labelled data; the algorithm looks for structure, such as clusters, in the inputs alone. 2. What is the bias-variance trade-off? Bias: "Bias is error introduced in your model due to oversimplification of the machine learning algorithm." It can lead to underfitting. When you train your model, the model makes simplified assumptions to make the target function easier to understand. Low-bias machine learning algorithms: Decision Trees, k-NN and SVM. High-bias machine learning algorithms: Linear Regression, Logistic Regression. Variance: "Variance is error introduced in your model due to a complex machine learning algorithm; your model learns noise from the training data set and performs badly on the test data set." It can lead to high sensitivity and overfitting. Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens up to a particular point. As you continue to make your model more complex, you end up over-fitting your model, and hence your model will start suffering from high variance.
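A toy illustration of this trade-off, with synthetic data and two made-up models: a model that memorises the training set drives its training error to zero, which is exactly the over-fitting behaviour described above.

```python
# Synthetic (x, y) pairs; the numbers are made up purely for illustration.
train = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.8)]

def mse(model, data):
    # mean squared error of a model over a list of (x, y) pairs
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# High-bias model: ignores x and always predicts the training mean.
mean_y = sum(y for _, y in train) / len(train)
def simple_model(x):
    return mean_y

# High-variance model: memorises the training points (1-nearest-neighbour).
def complex_model(x):
    return min(train, key=lambda point: abs(point[0] - x))[1]

train_error_simple = mse(simple_model, train)    # large: the model underfits
train_error_complex = mse(complex_model, train)  # exactly 0: it fit the noise too
```

Zero training error is not the goal: the memorising model has also learned the noise in the training set, so its predictions swing with whatever data it happened to see.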
Bias-variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance. The k-nearest neighbours algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k, which increases the number of neighbours that contribute to the prediction and in turn increases the bias of the model. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by decreasing the C parameter, which allows more violations of the margin in the training data and thereby increases the bias but decreases the variance. There is no escaping the relationship between bias and variance in machine learning: increasing the bias will decrease the variance, and increasing the variance will decrease the bias. 3. What are exploding gradients? Gradient: The gradient is the direction and magnitude calculated during training of a neural network; it is used to update the network weights in the right direction and by the right amount. "Exploding gradients are a problem where large error gradients accumulate and result in very large updates to neural network model weights during training." At an extreme, the values of the weights can become so large as to overflow and result in NaN values. This makes your model unstable and unable to learn from your training data. 4. What is a confusion matrix? The confusion matrix is a 2×2 table that contains the 4 outputs produced by a binary classifier. Various measures, such as error rate, accuracy, specificity, sensitivity, precision and recall, are derived from it. Confusion Matrix A data set used for performance evaluation is called a test data set. It should contain the correct labels and the predicted labels. The predicted labels will be exactly the same as the correct labels if the performance of the binary classifier is perfect.
In real-world scenarios, the predicted labels usually match only part of the observed labels. A binary classifier predicts all data instances of a test dataset as either positive or negative. This produces four outcomes: True positive (TP), a correct positive prediction; False positive (FP), an incorrect positive prediction; True negative (TN), a correct negative prediction; False negative (FN), an incorrect negative prediction. Basic measures derived from the confusion matrix: Error Rate = (FP+FN)/(P+N); Accuracy = (TP+TN)/(P+N); Sensitivity (Recall, or True positive rate) = TP/P; Specificity (True negative rate) = TN/N; Precision (Positive predicted value) = TP/(TP+FP); F-Score (weighted harmonic mean of precision and recall) = (1+b²)·(PREC·REC)/(b²·PREC+REC), where b is commonly 0.5, 1 or 2. 6. Explain how a ROC curve works. The ROC curve is a graphical representation of the contrast between true positive rates and false positive rates at various thresholds. It is often used as a proxy for the trade-off between sensitivity (true positive rate) and the false positive rate. 7. What is selection bias? Selection bias occurs when the sample obtained is not representative of the population intended to be analysed. 8. Explain the SVM machine learning algorithm in detail. SVM stands for support vector machine; it is a supervised machine learning algorithm which can be used for both regression and classification. If you have n features in your training data set, SVM tries to plot them in n-dimensional space, with the value of each feature being the value of a particular coordinate. SVM uses hyperplanes to separate out different classes based on the provided kernel function. 9. What are support vectors in SVM? In a typical SVM diagram, the thinner lines mark the distance from the classifier to the closest data points, which are called the support vectors. The distance between the two thin lines is called the margin. 10. What are the different kernel functions in SVM?
There are four common types of kernels in SVM: the linear kernel, the polynomial kernel, the radial basis function (RBF) kernel, and the sigmoid kernel. 11. Explain the Decision Tree algorithm in detail. A decision tree is a supervised machine learning algorithm mainly used for regression and classification. It breaks down a data set into smaller and smaller subsets while at the same time an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes. A decision tree can handle both categorical and numerical data. 12. What are Entropy and Information Gain in the Decision Tree algorithm? The core algorithm for building decision trees is called ID3. ID3 uses Entropy and Information Gain to construct a decision tree. Entropy A decision tree is built top-down from a root node and involves partitioning the data into homogeneous subsets. ID3 uses entropy to check the homogeneity of a sample. If the sample is completely homogeneous, the entropy is zero; if the sample is equally divided, it has an entropy of one. Information Gain The Information Gain is based on the decrease in entropy after a dataset is split on an attribute. Constructing a decision tree is all about finding the attributes that return the highest information gain. 13. What is pruning in a Decision Tree? When we remove sub-nodes of a decision node, the process is called pruning; it is the opposite of splitting. 14. What is Ensemble Learning? Ensemble learning is the art of combining a diverse set of learners (individual models) to improve the stability and predictive power of the model. Ensemble learning has many variants, but two of the more popular techniques are mentioned below. Bagging Bagging trains similar learners on small bootstrap samples of the data and then takes the mean of all the predictions. In generalised bagging, you can use different learners on different populations. As you would expect, this helps to reduce the variance error.
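The bagging procedure just described can be sketched in a few lines. This is a toy regression ensemble, not any specific library implementation: each "weak learner" simply predicts the mean of its own bootstrap sample, and bagging averages those predictions.

```python
import random

def bootstrap_sample(data, rng):
    # sample with replacement, same size as the original data
    return [rng.choice(data) for _ in data]

def bagged_mean_prediction(data, n_learners=200, seed=0):
    rng = random.Random(seed)
    predictions = []
    for _ in range(n_learners):
        sample = bootstrap_sample(data, rng)
        # each "learner" is trained on its own bootstrap sample;
        # here the learner just predicts its sample mean
        predictions.append(sum(sample) / len(sample))
    # bagging averages the individual predictions, reducing variance
    return sum(predictions) / len(predictions)

prediction = bagged_mean_prediction([1.0, 2.0, 3.0, 4.0, 5.0])
```

Because the individual bootstrap means scatter around the true mean, their average is much more stable than any single learner's output, which is the variance-reduction effect described above.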
Boosting Boosting is an iterative technique which adjusts the weight of an observation based on the last classification. If an observation was classified incorrectly, it tries to increase the weight of that observation, and vice versa. Boosting in general decreases the bias error and builds strong predictive models. However, it may overfit the training data. 15. What is Random Forest? How does it work? Random forest is a versatile machine learning method capable of performing both regression and classification tasks. It is also used for dimensionality reduction and can handle missing values and outlier values. It is a type of ensemble learning method in which a group of weak models combine to form a powerful model. In a Random Forest, we grow multiple trees as opposed to a single tree. To classify a new object based on its attributes, each tree gives a classification. The forest chooses the classification having the most votes (over all the trees in the forest); in the case of regression, it takes the average of the outputs of the different trees. 16. What cross-validation technique would you use on a time series data set? Instead of using k-fold cross-validation, you should be aware of the fact that a time series is not randomly distributed data; it is inherently ordered chronologically. For time series data, you should use techniques like forward chaining, where you build the model on past data and then test on the data that follows: fold 1: training[1], test[2]; fold 2: training[1 2], test[3]; fold 3: training[1 2 3], test[4]; fold 4: training[1 2 3 4], test[5]. 17. What is logistic regression? Or: state an example of when you have used logistic regression recently. Logistic Regression, often referred to as the logit model, is a technique to predict a binary outcome from a linear combination of predictor variables. For example, suppose you want to predict whether a particular political leader will win an election or not. In this case, the outcome of the prediction is binary, i.e. 0 or 1 (Win/Lose).
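The election example can be sketched with a logistic (sigmoid) link. The intercept and weights below are hypothetical, chosen only to show the mechanics rather than taken from any real model:

```python
import math

def win_probability(money_spent, hours_campaigning,
                    w0=-4.0, w1=0.8, w2=0.5):
    # logit: a linear combination of the predictor variables
    # (intercept w0 and weights w1, w2 are made-up illustrative values)
    logit = w0 + w1 * money_spent + w2 * hours_campaigning
    # the sigmoid squashes the logit into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-logit))

p = win_probability(money_spent=5.0, hours_campaigning=3.0)
outcome = 1 if p >= 0.5 else 0  # binary outcome: 1 = win, 0 = lose
```

In a real logistic regression the weights would be estimated from historical data by maximum likelihood; only the sigmoid mapping from predictors to a probability is fixed.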
The predictor variables here would be the amount of money spent on election campaigning for a particular candidate, the amount of time spent campaigning, etc. 18. What do you understand by the term Normal Distribution? Data is usually distributed in different ways, with a bias to the left or to the right, or it can all be jumbled up. However, there are cases where data is distributed around a central value without any bias to the left or right, reaching a normal distribution in the form of a bell-shaped curve. The random variables are distributed in the form of a symmetrical bell-shaped curve. 19. What is a Box-Cox Transformation? The dependent variable in a regression analysis might not satisfy one or more assumptions of an ordinary least squares regression. The residuals could either curve as the prediction increases or follow a skewed distribution. In such scenarios, it is necessary to transform the response variable so that the data meets the required assumptions. A Box-Cox transformation is a statistical technique to transform a non-normal dependent variable into a normal shape. Normality is an important assumption for many statistical techniques; if your data isn't normal, applying a Box-Cox transformation means that you are able to run a broader number of tests. The Box-Cox transformation is named after the statisticians George Box and Sir David Roxbee Cox, who collaborated on a 1964 paper and developed the technique. 20. How will you define the number of clusters in a clustering algorithm? Though the clustering algorithm is not specified, this question is mostly asked in reference to K-Means clustering, where “K” defines the number of clusters. For example, the following image shows three different groups.
The within-cluster sum of squares (WSS) is generally used to measure the homogeneity within a cluster. If you plot WSS for a range of cluster counts, you get a plot generally known as the Elbow Curve. The red-circled point in such a graph, e.g. number of clusters = 6, is the point after which you don't see any significant decrease in WSS. This point is known as the bending point and is taken as K in K-Means. This is the widely used approach, but some data scientists also use hierarchical clustering first, to create dendrograms and identify the distinct groups from there. 21. What is deep learning? Deep learning is a subfield of machine learning concerned with artificial neural networks, which are inspired by the structure and function of the brain. Machine learning includes many algorithms, such as linear regression, SVMs and neural networks, and deep learning is essentially an extension of neural networks. In ordinary neural nets we use a small number of hidden layers, but in deep learning we use a large number of hidden layers to better capture the input-output relationship. 22. What are Recurrent Neural Networks (RNNs)? Recurrent nets are a type of artificial neural network designed to recognise patterns in sequences of data, such as time series from stock markets or government agencies. To understand recurrent nets, you first have to understand the basics of feed-forward nets. Both RNNs and feed-forward networks are named after the way they channel information through a series of mathematical operations performed at the nodes of the network. One feeds information straight through (never touching the same node twice), while the other cycles it through a loop; the latter are called recurrent. Recurrent networks, on the other hand, take as their input not just the current input example they see, but also what they have perceived previously in time.
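A single recurrent step can be sketched as a function of the current input and the previous hidden state. This is a scalar toy with hypothetical, untrained weights, meant only to show how the hidden state carries information forward:

```python
import math

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8):
    # the new hidden state mixes the current input with the previous
    # hidden state, then squashes the result through tanh
    return math.tanh(w_x * x_t + w_h * h_prev)

h = 0.0  # initial hidden state
for x in [1.0, 0.0, 1.0]:  # a short input sequence
    h = rnn_step(x, h)
# h now depends on the whole sequence, not just the last input
```

Because `h_prev` feeds back into every step, the same input produces different outputs depending on what came before, which is the "memory" property the text describes.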
The BTSXPE at the bottom of the drawing represents the input example in the current moment, and CONTEXT UNIT represents the output of the previous moment. The decision a recurrent neural network reaches at time t-1 affects the decision that it will reach one moment later, at time t. So recurrent networks have two sources of input, the present and the recent past, which combine to determine how they respond to new data, much as we do in life. The error they generate will return via back propagation and be used to adjust their weights until the error can't go any lower. Remember, the purpose of recurrent nets is to accurately classify sequential input. We rely on the back propagation of error and gradient descent to do so. Back propagation in feed-forward networks moves backward from the final error through the outputs, weights and inputs of each hidden layer, assigning those weights responsibility for a portion of the error by calculating their partial derivatives, ∂E/∂w, the relationship between their rates of change. Those derivatives are then used by our learning rule, gradient descent, to adjust the weights up or down, whichever direction decreases the error. Recurrent networks rely on an extension of back propagation called back propagation through time, or BPTT. Time, in this case, is simply expressed by a well-defined, ordered series of calculations linking one time step to the next, which is all back propagation needs to work. 23. What is the difference between machine learning and deep learning? Machine learning: Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorised into the following three categories: supervised machine learning, unsupervised machine learning, and reinforcement learning. Deep learning: Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. 24.
What is reinforcement learning? Reinforcement learning is learning what to do and how to map situations to actions. The end result is to maximise a numerical reward signal. The learner is not told which action to take, but instead must discover which action will yield the maximum reward. Reinforcement learning is inspired by how human beings learn; it is based on a reward/penalty mechanism. 25. What is selection bias? Selection bias is the bias introduced by the selection of individuals, groups or data for analysis in such a way that proper randomisation is not achieved, thereby ensuring that the sample obtained is not representative of the population intended to be analysed. It is sometimes referred to as the selection effect. The phrase “selection bias” most often refers to the distortion of a statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may not be accurate. 26. Explain what regularisation is and why it is useful. Regularisation is the process of adding a tuning penalty to a model to induce smoothness in order to prevent overfitting. This is most often done by adding a constant multiple of the weight vector's norm, often the L1 (Lasso) or L2 (ridge) norm. The model predictions should then minimize the loss function calculated on the regularised training set. 27. What is TF-IDF vectorization? tf-idf, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining. The tf-idf value increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general. 28.
What are Recommender Systems? A subclass of information filtering systems that are meant to predict the preferences or ratings that a user would give to a product. Recommender systems are widely used for movies, news, research articles, products, social tags, music, etc. 29. What is the difference between regression and classification ML techniques? Both regression and classification machine learning techniques come under supervised machine learning. In supervised machine learning, we train the model on a labelled data set: during training we explicitly provide the correct labels, and the algorithm tries to learn the pattern from input to output. If the labels are discrete values (e.g. A, B), it is a classification problem; if the labels are continuous values (e.g. 1.23, 1.333), it is a regression problem. 30. If you have 4 GB of RAM in your machine and you want to train your model on a 10 GB data set, how would you go about this problem? Have you ever faced this kind of problem in your machine learning/data science experience so far? First of all, you have to ask which ML model you want to train. For neural networks: batching with a memory-mapped NumPy array will work. Steps: load the data as a memory-mapped NumPy array (np.memmap), which maps the data set on disk rather than loading it all into memory; pass indices to the array to fetch only the data required; feed that data to the neural network, using a small batch size. For SVM: partial fitting will work. Steps: divide the one big data set into smaller data sets; use the partial-fit method, which requires only a subset of the complete data set at a time; repeat for the other subsets. 31. What is a p-value? When you perform a hypothesis test in statistics, a p-value can help you determine the strength of your results. A p-value is a number between 0 and 1; its value denotes the strength of the results. The claim that is on trial is called the Null Hypothesis.
A low p-value (≤ 0.05) indicates evidence against the null hypothesis, so we can reject the null hypothesis. A high p-value (> 0.05) indicates weak evidence against the null hypothesis, so we fail to reject it. A p-value right at 0.05 could go either way. To put it another way: with a high p-value, your data are likely under a true null; with a low p-value, your data are unlikely under a true null.

32. What is 'Naive' in a Naive Bayes? The Naive Bayes algorithm is based on Bayes' theorem, which describes the probability of an event based on prior knowledge of conditions that might be related to the event. What is naive about it? The algorithm is 'naive' because it assumes the features are independent of one another, an assumption that may or may not turn out to be correct.

33. Why do we generally use the softmax non-linearity as the last operation in a network? Because it takes in a vector of real numbers and returns a probability distribution. Its definition is as follows. Let x be a vector of real numbers (positive, negative, whatever; there are no constraints). Then the i'th component of softmax(x) is

    softmax(x)_i = exp(x_i) / Σ_j exp(x_j)

It should be clear that the output is a probability distribution: each element is non-negative and the sum over all components is 1.

34. What are different ranking algorithms? Traditional ML algorithms solve a prediction problem (classification or regression) on a single instance at a time. E.g. if you are doing spam detection on email, you look at all the features associated with that email and classify it as spam or not; the aim is to come up with a class (spam or no-spam) or a single numerical score for that instance. Ranking algorithms such as learning to rank (LTR) instead solve a ranking problem on a list of items: the aim is to come up with an optimal ordering of those items. As such, LTR doesn't care much about the exact score each item gets, but cares more about the relative ordering among all the items.
RankNet, LambdaRank and LambdaMART are all LTR algorithms developed by Chris Burges and his colleagues at Microsoft Research.
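The softmax definition from question 33 is easy to check numerically. Here is a minimal from-scratch sketch (my illustration, not part of the original article), including the standard max-subtraction trick that real implementations use for numerical stability:

```python
import math

def softmax(x):
    # Subtracting the max doesn't change the result (it cancels in the
    # ratio) but prevents overflow in exp() for large inputs.
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)
print(sum(probs))  # the components are non-negative and sum to 1
```

Because every output is non-negative and the outputs sum to 1, the vector can be read directly as a probability distribution over classes, which is exactly why softmax is the usual final operation in a classification network.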
https://towardsdatascience.com/top-30-data-science-interview-questions-7dd9a96d3f5c
['Nitin Panwar']
2020-01-23 05:46:54.585000+00:00
['Machine Learning', 'Neural Networks', 'Deep Learning', 'AI', 'Data Science']
From ‘Fat Amy’ to ‘Fit Amy’
From ‘Fat Amy’ to ‘Fit Amy’ Once again, society is focusing on all the wrong things about Rebel Wilson’s weight-loss journey Photo Credit to Daily Mail UK Growing up in a family that lived on Rodgers and Hammerstein, I was super excited about a series of more current musical films I could share with my daughters. In the Pitch Perfect movies, actress Rebel Wilson plays a witty, sarcastic, don’t-give-a-f**k, loveable character — Fat Amy. My family owns all three of the Pitch Perfect movies and we jam out to their soundtracks on a regular basis. Whenever we watch the films, my daughters like to pretend to be the characters, and every time we end up in the same argument. ‘I want to be Beca, you be Fat Amy.’ ‘No, I don’t want to be Fat Amy, I want to be Beca.’ ‘Ladies, it doesn’t matter who is who, just enjoy the movie. I’ll be Amy, and you both can be Beca… but what is wrong with being Amy?’ ‘Nothing. She’s so funny, but Beca is prettier because she looks more normal, not like Fat Amy.’ I didn’t have to ask what looked ‘prettier’ or ‘more normal’ about Beca. We already know that society glorifies straight-sized women, so why was it necessary to slap the fat label before Amy in the first place? When it comes to Rebel’s decision to make healthier choices and honor her body with movement, why is the main takeaway that she lost weight? Shouldn’t the focus be on living a healthier lifestyle? Most importantly, how can we rewrite the narrative by celebrating the commitment to making healthy choices, instead of the mere side effect that is losing weight?
https://medium.com/fearless-she-wrote/from-fat-amy-to-fit-amy-14a2e9dc38c6
['Estrella Ramirez']
2020-10-09 14:01:26.675000+00:00
['Weight Loss', 'Body Image', 'Society', 'Women', 'Lifestyle']
Caveats of using return with try/except in Python
User code can raise built-in exceptions. Python provides try/except to handle exceptions and proceed with the further execution of the program without interruption. Let's quickly get to an example of a basic try/except clause.

try/except statements

Assuming the file is unavailable, executing the code below gives the output shown.

try:
    f = open("testfile.txt")
    ...
except FileNotFoundError as e:
    print(f" Error while reading file {e} ")

Output:

 Error while reading file [Errno 2] No such file or directory: 'testfile.txt' 

In practical use cases such as connecting to a database or opening a file object, we may need to perform teardown operations such as closing the database connection or the file irrespective of which block gets executed. finally is one such block that can be reserved for these operations, as it always gets executed. Let's look at an example.

try/except/finally statements

try:
    f = open("testfile.txt")
except FileNotFoundError as e:
    print(f" Error while reading file {e} ")
finally:
    print(" Closing the file ")
    f.close()

(Note: if open() itself fails here, f is never bound, so f.close() in the finally block would itself raise a NameError; in real code, guard the cleanup accordingly.)

So what could possibly go wrong here? Why should we be cautious? Well, one could easily put their foot in their mouth when using return statements with try/except/finally in Python. Let's carefully take one step at a time to understand the usage of return statements during exception handling.

1. Usage of return with try/except

def test_func():
    try:
        x = 10
        return x
    except Exception as e:
        x = 20
        return x
    finally:
        x = 30
        return x

print(test_func())

Output: 30

If you think the output of the above code is 10, I am afraid you are wrong. It's pretty normal to make that assumption, because we tend to think that the moment there is a return statement in a function, it returns (exits) from the function. Well, that is not true in this case. From the docs:

If a finally clause is present, the finally clause will execute as the last task before the try statement completes. The finally clause runs whether or not the try statement produces an exception.
If the try statement reaches a break, continue or return statement, the finally clause will execute just prior to that statement's execution. If a finally clause includes a return statement, the returned value will be the one from the finally clause's return statement, not the value from the try/except clause's return statement.

So, as you guessed, the output of the above code will be 30. Now, what happens if an exception is raised in the aforementioned code?

2. Usage of return with exceptions

def test_func():
    try:
        x = 10
        raise Exception
    except Exception as e:
        print(f" Raising exception ")
        x = 20
        return x
    finally:
        x = 30
        return x

print(test_func())

Output:
 Raising exception 
30

So, again the output value of x will be 30. We should remember the fact that a finally block ALWAYS gets executed. To have a clearer idea of the execution flow, let's add print statements in every block.

def test_func():
    try:
        x = 10
        print(f" Inside try block ")
        return x
    except Exception as e:
        x = 20
        print(f" Inside except block ")
        return x
    finally:
        x = 30
        print(f" Inside finally block ")
        return x

print(test_func())

Output:
 Inside try block 
 Inside finally block 
30

This should give an idea of the execution flow. Now that we have a good understanding of how try/except/finally works with return statements, let's try to squeeze in another clause. An else clause can be added along with try/except, and the else clause will get executed if the try block does not raise an exception.

3.
Usage of return with try/else/finally

def test_func():
    try:
        x = 10
        print(f" Inside try block ")
        return x
    except Exception as e:
        x = 20
        print(f" Inside except block ")
        return x
    else:
        print(f" Inside else block ")
        x = 40
        return x
    finally:
        x = 30
        print(f" Inside finally block ")
        return x

print(test_func())

Output:
 Inside try block 
 Inside finally block 
30

So why didn't the else clause get executed here, even though the try block did not raise any exception? Note the return statement in the try block. The else block never got executed because the function returned before execution ever reached the else clause. Now remove the return statement in the try block and execute the above code again.

def test_func():
    try:
        x = 10
        print(f" Inside try block ")
    except Exception as e:
        x = 20
        print(f" Inside except block ")
        return x
    else:
        print(f" Inside else block ")
        x = 40
        return x
    finally:
        x = 30
        print(f" Inside finally block ")
        return x

print(test_func())

Output:
 Inside try block 
 Inside else block 
 Inside finally block 
30

Summary:

Use extra caution when adding return in try/except/finally clauses.
The finally clause runs whether or not the try statement produces an exception.
If a finally clause includes a return statement, the returned value will be the one from the finally clause's return statement.
An else clause will get executed if the try block does not raise an exception.
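A sketch of the safer pattern these caveats point toward (my example, not from the article): keep finally for cleanup only and never return from it, or let a with block do the cleanup for you.

```python
# `finally` is reserved for cleanup; the function's result always comes
# from the `try`/`except` returns, never from `finally`.
def read_first_line(path):
    try:
        f = open(path)
    except FileNotFoundError:
        return None              # handle the failure explicitly
    try:
        return f.readline()      # this is the value the caller gets
    finally:
        f.close()                # runs just before the return completes

# Equivalent and more idiomatic: a context manager closes the file for us.
def read_first_line_with(path):
    try:
        with open(path) as f:
            return f.readline()
    except FileNotFoundError:
        return None

print(read_first_line("missing.txt"))  # None
```

Because neither finally block (explicit or implicit in the with statement) contains a return, the cleanup can never silently override the value the function meant to return.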
https://medium.com/python-in-plain-english/caveats-of-using-return-with-try-except-in-python-1f21cadbfa00
['Dinesh Kumar K B']
2020-09-11 15:30:24.099000+00:00
['Programming', 'Python Programming', 'Python3', 'Python', 'Software Development']
Hello World of Machine Learning using TensorFlow
Hello World of Machine Learning using TensorFlow Prajwolpkc Jul 30 · 8 min read

Let's learn the basic 'Hello World' of machine learning. We will use TensorFlow, an end-to-end open-source machine learning platform, deployed on Google Cloud Platform.

The basic view of machine learning: consider building applications in the traditional manner, as represented in the following diagram. You express rules in a programming language. These act on data, and your program provides answers. In the case of activity detection, the rules (the code you wrote to define types of activities) acted upon the data (the person's movement speed) in order to find an answer: the return value from the function for determining the activity status of the user (whether they were walking, running, biking, etc.). The process for detecting this activity status via machine learning is very similar — only the axes are different. Instead of trying to define the rules and express them in a programming language, you provide the answers (typically called labels) along with the data, and the machine then infers the rules that determine the relationship between the answers and the data. For example, an activity detection scenario might look like this in a machine learning context: you gather lots of data and label it to effectively say "this is what walking looks like", "this is what running looks like", etc. Then the computer can infer the rules that determine, from the data, what the distinct patterns are that denote a particular activity. Beyond being an alternative method of programming this scenario, this also gives you the ability to open up new scenarios, such as the golfing one, that may not have been possible under the rules-based traditional programming approach. In traditional programming your code compiles into a binary that is typically called a program. In machine learning, the item you create from the data and labels is called a model.
Consider the result of this to be a model, which at runtime is used like this: you pass the model some data, and the model uses the rules it inferred from training to come up with a prediction, i.e. "that data looks like walking", "that data looks like biking", etc.

Let's build a very simple 'Hello World' model which has most of the building blocks that can be used in any machine learning scenario. We will do the following:

Create a VM on Google Cloud Platform and set up the Python development environment
Create a script for a machine-learned model
Train your neural network
Test your model

Go to the Google Cloud Platform console and set up a VM to carry out the lab tasks. Select Navigation menu > Compute Engine, and then click VM instances. In the Create an instance dialog, name the instance mllearn and, in the right pane, set the Machine type to n1-standard-1. Leave all other fields at the default values. Click Create. Your new VM is listed in the VM instances list; a green check means your VM has been successfully created. Click SSH to the right of your new VM instance to connect to the console of the VM via SSH.

Install the Python development environment on your system:

python3 --version
pip3 --version
virtualenv --version

The output shows that Python 3.5.3 is already in the environment, but not the pip package manager or Virtualenv.

2. Install the pip package manager and Virtualenv:

sudo apt update
sudo apt install python3-pip -y
sudo pip3 install -U virtualenv  # system-wide install

3. Confirm the pip package manager and Virtualenv are installed.

Create a virtual environment:

virtualenv --system-site-packages -p python3 ./venv

2. Activate the virtual environment:

source ./venv/bin/activate
(venv) student-02-f4b8fb7e3cd3@mllearn:~$

3. Upgrade pip so you can install packages within the virtual environment without affecting the host system setup:

pip install --upgrade pip

4.
View the packages installed within the virtual environment.

Install the TensorFlow pip package:

pip install --upgrade tensorflow

2. Verify the install:

python -c "import warnings;warnings.simplefilter(action='ignore', category=FutureWarning);import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

We will see something like this at the end:

Tensor("Sum:0", shape=(), dtype=float32)

TensorFlow is now installed!

Create your first machine learning model

Consider the following sets of numbers (the same values used in the script below):

X: -1, 0, 1, 2, 3, 4
Y: -2, 1, 4, 7, 10, 13

Can you see the relationship between them? As we look at them we might notice that the X value increases by 1 as you read left to right, and the corresponding Y value increases by 3. So you probably think Y = 3X plus or minus something. Then we'd look at X = 0, see that Y = 1, and come up with the relationship Y = 3X + 1. That's almost exactly how we would use code to train a model, known as a neural network, to spot the patterns between these items of data. Looking at the code used to do it, we use data to train the neural network: by feeding it a set of Xs and a set of Ys, it should be able to figure out the relationship.

Step through creating the script piece by piece. Create and open the file model.py:

nano model.py

Add the following imports:

TensorFlow, called tf for ease of use
A library called numpy, which helps represent the data as lists
keras, the framework for defining a neural network as a set of sequential layers

Also add code to suppress deprecation warnings to make the output easier to follow.
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.python.util import deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False

Leave model.py open in nano for the next section.

Define and compile the neural network

Next, create the simplest possible neural network. It has 1 layer, that layer has 1 neuron, and the input shape is 1 value.

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])

Next, write the code to compile your neural network. When we do, we must specify 2 functions: a loss and an optimizer. If we've seen lots of math for machine learning, here's where we would usually use it, but in this case it's nicely encapsulated in functions. Step through what's happening:

We know that in the function, the relationship between the numbers is y = 3x + 1.
When the computer tries to 'learn' that, it makes a guess, maybe y = 10x + 10. The loss function measures the guessed answers against the known correct answers and measures how well or how badly it did.
Next, the model uses the optimizer function to make another guess. Based on the loss function's result, it tries to minimize the loss. At this point maybe it comes up with something like y = 5x + 5. While this is still pretty bad, it's closer to the correct result (i.e. the loss is lower).
The model repeats this for the number of epochs you specify.

But first, we add code to model.py to tell it to use mean squared error for the loss and stochastic gradient descent (sgd) for the optimizer.
We don't need to understand the math for these yet, but we can see that they work. Over time we will learn the different and appropriate loss and optimizer functions for different scenarios.

2. Add the following code to model.py:

model.compile(optimizer='sgd', loss='mean_squared_error')

Leave model.py open in nano for the next section.

Providing the data

As you can see, the relationship between these is Y = 3X + 1, so where X = -1, Y = -2, etc. A Python library called numpy provides lots of array-type data structures that are a de facto standard way of feeding in data. To use these, specify the values as an array in numpy using np.array(). Add the following code to model.py:

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)

model.py now contains all of the code needed to define the neural network. Next let's add code to train the neural network to infer the patterns between these numbers and use those to create a model.

Training the neural network

To train the neural network to 'learn' the relationship between the Xs and Ys, we step it through a loop: make a guess, measure how good or bad it is (aka the loss), use the optimizer to make another guess, and so on. It will do this for the number of epochs you specify, say 500.
model.fit(xs, ys, epochs=500)

The final model.py should look like this:

import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import tensorflow as tf
import numpy as np
from tensorflow import keras
from tensorflow.python.util import deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-2.0, 1.0, 4.0, 7.0, 10.0, 13.0], dtype=float)

model.fit(xs, ys, epochs=500)

Press Ctrl+X to close nano, then press Y to save the model.py script, then Enter to confirm the script name.

Run your script:

python model.py

Look at the output, which may be slightly different on your machine. Notice that the script prints out the loss for each epoch. Scroll through the epochs: the loss value is quite large for the first few epochs, but gets smaller with each step. By the time the training is done, the loss is extremely small, showing that our model is doing a great job of inferring the relationship between the numbers.

Using the model

We now have a model that has been trained to learn the relationship between X and Y. We can use the model.predict method to figure out the Y for a previously unknown X. So, for example, if X = 10, what do you think Y will be?

nano model.py

2. Add the following code to the end of the script:

print(model.predict([10.0]))

3. Press Ctrl+X, Y, then Enter to save and close model.py.

4. Take a guess about the Y value, then run your script:

python model.py

The Y value is listed after the epochs. We might have thought Y = 31, right? But in the example output above, it ended up being a little over (31.005917). Why?
Neural networks deal with probabilities, so given the data we fed into it, the neural network calculated a very high probability that the relationship between X and Y is Y = 3X + 1, but with only 6 data points we can't know for sure. As a result, the result for 10 is very close to 31, but not necessarily exactly 31. As you work with neural networks, you will see this pattern recurring: you will almost always deal with probabilities, not certainties, and will do a little bit of coding to figure out what the result is based on those probabilities, particularly when it comes to classification. This concludes the session for Hello World of Machine Learning.
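The guess/measure-loss/optimize loop described above can also be written out by hand. This is an illustrative sketch (mine, not part of the lab) that fits y = w*x + b to the same six points with plain full-batch gradient descent on mean squared error:

```python
# Fit y = w*x + b to the tutorial's data: the same guess -> loss -> adjust
# loop that Keras runs, without any framework.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-2.0, 1.0, 4.0, 7.0, 10.0, 13.0]

w, b, lr = 0.0, 0.0, 0.01
n = len(xs)
for epoch in range(5000):
    # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw  # take a small step against the gradient
    b -= lr * db

print(w, b)        # very close to 3 and 1
print(w * 10 + b)  # very close to 31, like model.predict([10.0])
```

With only six points the loss surface is a simple quadratic bowl, so gradient descent walks steadily toward roughly w = 3 and b = 1, mirroring what the Keras model infers.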
https://medium.com/analytics-vidhya/hello-world-of-machine-learning-using-tensorflow-fee1318776fe
[]
2020-12-16 17:26:36.794000+00:00
['Google Cloud Platform', 'Neural Networks', 'Python', 'Machine Learning', 'TensorFlow']
Why Enigma’s “Sadeness” Could Never Exist Today
by Erin Sullivan Part Gregorian chant, part Native American soundscape, part Pure Moods, this song was a disaster in theory and execution. Still, it topped the charts in December of 1991 and I’m pretty sure I loved it. I was shocked to find out the actual date, though, because if you’ll recall that song continued its run for the better part of that decade. I guess because it was timeless. Literally without a time or genre. It’s true that children determine what gets played on the radio, but is this what we really wanted? Who was this song for? It was technically a Christian song, the opening line in Latin translating to “in the name of Christ, Amen,” so I guess it was for youth groups, but who else? Well, it was for losers. For kids in t-shirts that highlighted the plight of sea turtles and their near extinction. That kid was all of us. That kid was me. We lacked the self awareness to know how hard we were being played. But kids are cool now. Attribute it to the Internet, technology, a constant threat of danger, whatever — Enigma would be ripped to shreds today, kids texting their YouTube comments faster than you could Skip-It. “This soung sounds lykr my butt.” I don’t know, I did Skip-It. Oh sure, all of these kids will be terrible in 10 years, a lifetime of confidence and immediacy under their belts, but damn if they wouldn’t .gif some monks into submission. Ten Decembers after Enigma’s chart topper, Ashanti’s “Always On Time” was radio’s most popular jam. Can’t be mad at us there. And while we’re five months away from that next decade marker, if my local radio station is any indicator, Jason Derulo will not only have become reigning December prince, he will have become President of the United States of America. Primaries will be canceled, Obama will immediately step down. 
(As a side note, Jason Derulo, if you’re reading this, I get a text about once a week from a friend claiming to have seen you around the Atlanta area, so if I could get a confirm or deny on a recent P.F. Chang dinner and front desk Embassy Suites check-in.) (Oh shit Jason while I still have you, I wanted to talk to you about your recent sampling of Robin S.’s “Show Me Love.”) And so it goes. Ashanti, Jason Derulo. Never again will Enigma or Baz Luhrmann’s “Everybody’s Free (To Wear Sunscreen)” happen. This is not a slam on the current state of pop culture, it’s a simple acknowledgement that an era is lost forever, one without the readiness of the internet to tell you just how lame a song is and why. I once asked Yahoo! Answers this question: “Does anyone here like radio? Just in general, radio: good or bad?” Overwhelmingly people loved radio. In this case, this progression, it’s neither good nor bad. Just different. And cooler. Erin Sullivan lives in Portland, Oregon.
https://medium.com/the-hairpin/why-enigmas-sadeness-could-never-exist-today-1227d6cd36b8
['The Hairpin']
2016-06-01 11:52:01.458000+00:00
['Erin Sullivan', 'Enigma', 'Music']
Bridging the Gap Between Developers and Marketers with Rich Mironov
Bridging the Gap Between Developers and Marketers with Rich Mironov Episode 48 Have you ever been told to be more “innovative” with your code? In this episode of Programming Leadership, Marcus and his guest, Rich Mironov, discuss the all too common disconnect between developers and those on the marketing side of organizations. According to Rich, this is the result of two very different work cultures existing in the same organization — one that’s collaborative and one that’s highly individualistic. The culture gap can be hard to cross. Thankfully, Rich has spent years coming up with solutions to bridge that gap. It’s not always easy, but Rich believes that it can be done through a better understanding of how the two cultures work along with constant education and communication. Show Notes Differences in design principles between product and engineering management (1:35) Understanding the conflict between makers and marketers (6:22) How Rich helps marketers/sales develop a more useful frame for engineering (10:01) The “Innovation” Misconception (15:36) The culture gap between sales and development/product teams (21:46) Where does product management fit between sales and development? (26:31) Helping clients make effective organizational change (32:48) Links: Transcript Announcer: Welcome to The Programming Leadership podcast, where we help great coders become skilled leaders, and build happy, high performing software teams. Marcus: All right, welcome to the show. A bit of housekeeping as we begin, if you enjoy this podcast support us by leaving us a review. The stars matter a lot, the words are great, too. But our sponsors really like to see that people are listening, and we do too. Otherwise, we’re not sure anybody is actually out there. So, leave us a review on whatever platform you’re listening to this on. My guest today is Rich Mironov. Rich. Welcome to the show. Rich: Thanks so much. 
Marcus: Rich since I flubbed the first introduction and this is take two, would you introduce yourself for us? [laughing]. Rich: Sure. My name is Rich Mironov. I’m a 30-plus-year veteran, in Silicon Valley, of enterprise software product management. And these days, I coach heads-of-product. And I do a fair amount of design work on product management organizations and, sometimes, the larger product engineering design problem of how people fit together. And I’ve been a fan of yours for a long time, been following your writings and your daily send-outs, and it’s a pleasure to join you. Marcus: Well, thank you, and I have really enjoyed your writing as well, you just write such good stuff. And we’re going to have a link at the end to the website where people can go and find more of your wonderful, wonderful content. But you really touched on something, I think, is so important, and that is this idea of designing organizations where people can really effectively work together. And I know that you work in the product management space oftentimes, and I know that people in the show also represent some of them, the engineering management space. Do you see a big difference between those two organizations in terms of how we think about the design principles for the organization? Rich: I really do. And I think they’re adjacent, they’re related. But, mostly when we’re thinking about engineering organizations, we tend to want to talk about throughput or volume metrics. How efficient is the team? Can they get work done? I’ve never actually met an executive who told me that the development organization was fast enough. We know that, in the history of the world, we always believe that we’re one sprint, or one more hire, or one more release away from getting all the things we want. And honestly, it’s never been true. It’s never true, it’s never been true, and there’s this false economy that says the problem is throughput. The problem is speed. The problem is velocity. 
And when we put on our product management hats, we’re asking a very different set of questions. Are we building the most important thing or things? Because we know that the more we throw at the team, the less we get done. If we got nine projects, we’re going to finish none of them. So, if we were only going to get two things done this quarter that are really important, have we chosen the right ones? Key product question; exclusive OR question. Have we done good validation with our users and buyers in our market, so that when we shipped something beautiful, and perfect, and well-tested, and with lovely workflows, if nobody buys it, it doesn’t matter how good that work was? And so product folks are worried about the questions of, is there an audience? Will they pay for it? What about the competition? How do we make money, or monetize this if it’s supposed to make money? And then at the far end, how do we explain to the world what it is because, again, it may be obviously true to some people, but most of the world is not software development engineers, and so when we have the development team describe what they’re doing, most of the world doesn’t understand and then we fail. So, the product problem, I think, brackets the development problem, which is at the front end, are we doing the right things, in the right order, for the right reasons, with goals? And on the back end, are we delivering what the market wants and measuring success so we can tune and iterate the next time? Marcus: It’s just fascinating because you’re exactly right. You’re one more — the quote-unquote, “one more syndrome” problem, we just need one more hire, one more sprint, one more server, one more of whatever, and that most of these engineering organizations are thought of as, how can — I hear the word streamlined. How can we streamline it, which sounds like a car, but of course, an engineering organization is actually nothing like a car. It’s a complex adaptive system. It’s an organic thing. 
Rich: And I think there’s this weird, wrong analogy where we think of building software like building fences. Marcus: Amen to that brother. Yes. [laughing]. Rich: That’s right. Because all we need are a piece of land, and something to dig holes in, and generic labor, and we know just how long it’s going to take to build a fence. Whereas building software is perhaps the most creative and complex thing that people do in the world, or certainly up there. And, for me, it’s much more like trying to write a hit song or a great jazz solo, than digging ditches or going down to Home Depot to get a few folks at 10 bucks an hour for cash to sit in front of keyboards. Marcus: Oh, absolutely. In fact, I used to tell my clients — it’s funny, I used to say — because clients would approach me, and at my company, we built software for them. That’s a very common business model in the world. And I would say, “Now owning this software is more like owning a puppy than owning a fence.” That’s exactly what I would tell them, and because I wanted to set this expectation that there’s a lot of care and feeding involved. It’s not just every 10 years, you give it a whitewashing. Rich: That’s exactly right. And we have this weird duality where on the one hand, we want to whip the development team faster, because we think if we could just get it out a week earlier, somehow it’s better. And then we’re all grumpy and complaining when folks aren’t excited by what we ship. Marcus: Now Rich, I see a lot into the engineering side of organizations, but you see a lot into the product management side of organizations. Does product management have the same flawed mental models about post holes and whipping — whipping sounds terrible, but is there those same kinds of analogies or pressures at play? Rich: I think good product management teams are the — they’re the interface. 
They’re the connector between what I would call all the makers: the developers, the designers, the tech writers, the test automation engineers, all the folks who actually do work. And generally — because I’m almost always working with software companies proper where the thing we build is software, and if it isn’t good, we’re out of business. But the rest of the house has this very simplistic view that, for instance, selling is hard but building software is easy. And so, what I see and what I spent a lot of my time doing is pushing back on the executive team on the marketing and sales side of the house to continually and repeatedly push the concept, which they’re not excited about, that building software is more crafting art than it is science and repetitiveness. And that’s just — it’s a concept that’s got a lot of resistance to it. Marcus: Yeah, how do you think we got here? I just think it’s fascinating. Rich: Part of it, I think if we go back to the classic IT organization — and we’ll save this up for a little later because I don’t work with IT organizations — I only work with engineering teams, because engineering teams are profit centers, and IT groups our cost centers. And if you go back 30 years and say, well, the IT folks are the ones who configured my laptop and wrote SQL reports against the customer database. And neither of those was conceptually difficult, or even really perhaps that challenging, and so if I wandered into my little IT group, and I said, “Hey, we’ve got a new employee, I need a laptop by noon.” “Okay, we can do that. Did you fill out the form?” And, “I need one more report. Can somebody bring up whatever, SQLite Report Writer we have because I need to switch the columns and put a total at the bottom.” And those have not just only the feature that they’re straightforward, but we believe our senior users when they tell us what they want. 
So, if you’re the consumer of that one report, “Well, I guess you really know what you want, even if you’re not sure what you’re going to do with it.” And if you are the employee who needs that laptop, okay, you need a laptop. That idea that things are completely predictable, that they are repetitive, that they’re a manufacturing process, more than that, that if I asked for it, it’s because I’m entitled to it and I know what I want, are absolutely, for me, fundamental failures on the part of almost everybody on the non-makers side of the house, who are trained to write down on a post-it note what some random customer or buyer told them in eight words or less, assume it’s self-explanatory, it’s well thought out, it’s strategic and everybody wants it, and then walk that posted note over to some product manager, waiting for them to leap up and say, “Oh, yeah, I’d love to do that. We’ve been sitting idly just waiting for somebody with a good suggestion. And now we can feel fulfilled. Because we were just sitting around waiting for somebody to ask for something.” Marcus: So, you mentioned that marketing and sales really misunderstands — they have the wrong mental model about what engineering — the whole way products are built. So, as somebody who’s working in product management, how do you start to help them have and create a more useful model that is beneficial? Rich: I think there’s really two pieces to that. One is to recognize that it’s not their job to deeply understand the development process. So, we hire salespeople because they’re great salespeople, because they’re optimists, because they’re relentless, they never take no for an answer, you know, it’s not selling until the prospect’s said no three times. 
They’re great at escalating within the prospect’s organization if they don’t get what they want — which is whenever they come to a product manager and ask for something and get told no, you can count to 10 and know that they’ve gone to your CEO to go over your head — but that marketing and sales and a lot of the other groups, finance, it’s not their job to deeply grok how we build stuff. On the other hand, we need to endlessly teach and share and model that because the bad assumptions lead to lots of bad decisions. So, things I use, I look for the simplest possible tools. I have a paper version of a simple Kanban board that I carry around all the time, where for each team, or for each piece of the development organization, there’s a row, and in the far right column are the two or three most important things that we’re currently in building, so they’re in full development. The next column to the left, that is the things we’re doing architecture and design on because before we start full build, we should probably know how it’s going to work. And the column to the left of that is the validation column where we need to be going out in the market and finding out what good ideas are really not good ideas. And the reason to have it in a really simple printed form is because I’m always getting pulled aside in the hallway and somebody comes to me and says, “Oh, I need this thing. It’s really simple. It’s probably only 10 lines of code. I know we’re agile. Can’t we fit it into the current sprint? How hard could it be? We need to add teleportation to our ERP system.” Right? Marcus: [laughing]. Right. Rich: And rather than be the person who curses them out, and tells them they’re stupid, and kicks them in the butt, and offends them, what I want to do is I want to pull out my little printed version of my Kanban board — and it’s printed because I have no time in the hallway to go log on somewhere. 
If it’s an electronic system, I know nobody on the sales side will ever look at it. But we get to ask the question of which of the two things that this team is working on, that we agreed on Friday at the executive staff meeting are the two most important things for the company, do you think we should displace or delay or cancel for your new good idea that you thought of in the shower or in your commute in? And then, we agree that those things are really important. So, then we move one column to the left, say, “Well, okay, we could put your thing in, and put it right into design and architecture, but which of the two things that are waiting to go into full development that we’re doing design architecture, do you think aren’t important or aren’t as important?” Notice that keep framing this exclusive OR, but no one on the sales side believes in the existence of exclusive OR. And then we agree that the things in design or architecture are really the next good ones up. And so now we agree that we should put your good idea into the validation queue, and find out if it’s not just your good idea, but maybe other folks in our customer base or market care, and maybe they don’t, and so the best time to decide to not build something is before we start full development. So, that’s a whole dance to repetitively push into people’s faces in a polite way the idea that there’s not infinite capacity, the team’s not idling, waiting around. We have a plan. We can change the plan. But the things that are in the plan are the ones we’ve thoughtfully chosen for good reasons, and until we get a similarly good reason, we’re simply not going to throw the current work out, leave ourselves half-done, double-up on the whip, do all the things that development hates. Marcus: Yeah, I really like that because it also tells me that nothing on the board was just thought of in the shower and tossed in. You mentioned these phases, validation phase, and we have planning and architecture, and design. 
Those all mean that there's a process that people are going through. And yeah, things just don't get done on a whim. Rich: Well, sometimes they do. So, in almost every organization, there's somebody at the top of the organization chart who retains the right to walk over to the product and the development teams to say, "I know we had a plan, but the system's down, or there's a big deal, or I had a really good idea, and I'm the boss of you." And so, we have to be prepared for changes in the plan, but we want to buffer them, we want to slow them down just a little bit, because the rate of seemingly good ideas is 100x or 1000x our throughput. And so we're never ever going to be in a place where good ideas we didn't see before just walk up to us, bite us on the butt, and we throw them into the mix. That assumes all kinds of things that don't exist in the real world. Marcus: Well, I want to kind of turn back to something we talked about. So, I'm imagining this very important person who begins thinking, "The right way, the way I can really get something done, is just by telling someone, I'm the boss, you're going to do it." And my guess is this is a reaction to things not moving the way they want them to. Not moving fast enough, not being nimble enough, whatever the framing is, there's some dissatisfiers, they decide to break glass and pull the fire alarm — and they're going to use their power. And I have seen, and I'm wondering if you have too, that after a while, even when that starts to not work, sometimes people think some larger organizational change will fix the problem. That we're just not structured right. And I wonder if you see that as well. Rich: I see that in a lot of places, and often it's described to me in very qualitative terms as we're not innovative enough. Marcus: Hmm. So, that would be the thing they want to increase. Rich: That's right. And generally behind that, if you say, "Well, why do you think that?
Or, how do we know?" There's usually some random set of inputs from the sales, or the marketing, or the support teams about stuff we haven't done. The support organization always has a list as long as my arm of their top issues and bugs, and just as every system has a bottleneck, we know that if we fix the number one top bug, something else is now the number one top bug. Marcus: There'll be a new number one. Rich: Right, and so even if we put 100 percent of all of our product, and design, and development, and engineering effort into fixing all the things that support wanted, we might never get to the bottom of that list. By the way, if we're in the software business, we're now out of business, because we've neglected competitors, we've neglected new markets, we've neglected new features, we've just done the sustaining stuff. And ultimately, everybody walks away. Likewise, and again I'm mostly on the enterprise side, B2B, I've never met a B2B enterprise customer who didn't want a few things that weren't in my product. And usually, it's a pretty short list of no more than eight- or nine-hundred items. [laughing]. And I know that anytime we lose a deal — by the way, when we win deals — you may not know this, when we win deals, enterprise sales reps are able to explain that it's because they are great sales reps. And when we lose deals, it's either because we're missing a feature, see list, or the price was too high. Those are the two reasons why salespeople explain we lost deals. So — Marcus: Now this is a fundamental attribution error, right? Rich: Yes it's — Marcus: It's psychology. Rich: — absolutely fundamental. And I've never met a good salesperson who admitted that they had any hand in losing a deal. It's not who they are. It's not what they're paid for. It's not how they're rewarded. It's not how they're promoted. The good ones get to go to a club in Fiji, or Hawaii, and drink a lot and sleep around and do things I don't know about because I'm not invited.
But back to the innovation question, there’s an endless list of deals we didn’t win. And attached to every one of those deals is a handful of things that we either didn’t have, or we got told we didn’t have. They may not even be real, the customers may not even need them. Nobody’s vetted them, but there’s an endless list of stuff we didn’t do. And so the misconception here is, if we just got all of those things done — one, we’re never going to get them done and two, if you’ve ever tried to find a feature that you couldn’t find in Microsoft Office? Marcus: Oh, yes. Rich: It’s the result of 30-plus years of let’s add one new feature. And when you add all those features, your product becomes useless, it becomes unusable, it becomes worthless. And so the product problem here, as opposed to the development engineering problem is, how do we hold back all of the seemingly good ideas that are going to incrementally make my product less useful, less good, less friendly, harder to fix, harder to build? How do we push back on the one-off feature that’s just for one big customer that’s going to cause us to create another code line? How do we hold back the chaos, such that we can stay on track with building a product that the market wants, in volume, that’s usable, that’s beautiful, that’s good that’s tested, that works, in the face of the other half of the organization which endlessly forgets limitations and wants one more thing? So, there’s a lot of education here. Again, I don’t actually believe that the senior execs at my company will ever really embrace this thought. But it’s my job to keep reminding them until they’re starting to have some reaction. Back to your puppy analogy, you know, if your puppy pees on the carpet, and then you immediately give your puppy a treat, right? Marcus: Yes. Rich: We know how this works. 
And so when we have an enterprise sales team, where the salesperson came to the product team, and got a no, went around them to engineering, got a no, and then they went to the CEO who overrode product and engineering and gave them a yes, we've now established the peeing on the carpet model, where senior experienced salespeople at our company now know that the way to get things done is to escalate through the CEO, and to override, and to jam something into the backlog, or into the top of the sprint, via the CEO or the VP of sales. Because product and engineering don't give me what I want. Marcus: I have seen that many times, and it happens not just for sales. There's a lot of different groups who find that whether it's throwing a fit, or going around people, there's reasons all these things happen. Rich: And there's good reasons for some, but as a behavioral model, it fails. Marcus: It fails. Now I'm curious, I want to turn back just a moment. So, we've been talking a little bit about how these two organizations, we've talked about sales and product, how they interact. And I'm curious, do you see — because I really have no idea — sounds like you have a lot of visibility into the sales organization — do they do reorgs as often as I see engineering or products doing reorgs? Rich: Not so much. They do something that is generally smaller than that. So, there's a lot of rearranging of accounts or territories. Marcus: Oh, yes, I've seen that. Rich: Right. So, as your company grows, and you go from 10 salespeople to 20, you end up carving up smaller territories, and giving those folks more depth, more quota in their areas. And then you've got regional structures, and there's usually some helpers. There's like field sales engineers and other people who assist in the selling cycle. But I think — so it's less about blanket changes, unless you, for instance, move from a channel model to a direct selling model, where you dramatically change your product set.
Or when I see companies move from on-premise software to SaaS software, it gets a very different — the selling cycle, the teams, the — so when there's major change like that, I see that happen, but there's a lot of shuffling around. The other thing to note here is most salespeople are paid quarterly. Marcus: Like, quarterly bonuses or quarterly, I'm sorry, can — Rich: Well, so — commission. So I either hit my number or I don't for the quarter. At the end of the quarter, I get a big fat check if I have hit my number, and then I start fresh. Now, I may drag some prospects from quarter to quarter. But it's much more discrete in the sense that you could pick me up at the end of the quarter and plop me down on a new product or new territory, and yeah, I've lost some continuity on some things, but usually, you give me a little slack for that. And then I sell on what you give me, in the territory you give me, in the channels that you give me. And so there's a lot of people leaving at the end of the quarter after they collect their big bonus, their commission check, to go do something else or go do it somewhere else. Marcus: I feel like I'm — I just watched in the last month, Glengarry Glen Ross — "and the leads are terrible!" Or — Rich: The leads — that's right. And I keep that clip, that brilliant clip with Alec Baldwin who's in that movie, he's young and gorgeous, explaining that first prize is a Cadillac — Marcus: Second prize is the steak knives. Rich: Second prize is a set of steak knives, third prize — Marcus: You're fired. Rich: — you're fired, right. And I play that clip for a lot of my development teams and my product teams because folks simply don't believe that that's how life works on the sales side. There's such a culture gap. There's such a perception gap that it's less fiction than you would like. Marcus: Yeah, and we laugh about it here, and I've always kind of smiled at it. But it is an individualistic competition between people.
They ra — like they think nothing of the ranking, right? The sales contest that is — Rich: They live for the rankings. Marcus: Absolutely. That is the standard way they — so therefore, and I'm just — the idea of let's organize our group to work better together, doesn't matter, because everybody's just an individual over there. Rich: That's right. And there might be small teams. For instance, there might be a senior sales rep, and a sales development person, and an SE as a team, but every sales organization has thermometers on the walls and competitions and spiffs. And it's all about, were you in the top 10 percent? Did you beat your quota? Did you get to go to president's club for the quarter? You get the little plaque on your desk. It's intensely competitive. And so the idea that, for instance, you would have a development team, where we don't call out an individual and say, that's the best person on the team. We don't have the star of the quarter. It's bizarre. It's weird. It's — what do you mean? Because it is — sales is much more of an individual sport. And, we go to such great lengths to keep our teams working as teams, to reward the team, to bring out the team to the escape room, or the pizza, or the trip or whatever it is, to have metrics for the team, to help each other. In the standup, say, I need help, is there somebody who can give me a hand? You don't see that in the sales organization because it's all personal score. Marcus: Yes. I can't even imagine a sales standup. People would just not want to say [laughing] exactly right? To heck with you, buddy. I'm not revealing anything. Rich: That's right. Marcus: Okay. Well, let me ask this. So, I'm imagining this along a spectrum, so here, we've got the sales group; very individualistic, lots of competition. It's an individual sport. And then we get engineering where more and more we're hearing, and I think a good thing is, software development is a team sport and we want people to collaborate.
We have mobbing and swarming, and so where along that spectrum does product management fit? Rich: We get a bit schizophrenic. So, maybe one of the most important characteristics, or skills, for a product manager is to vary their language choice from meeting to meeting. And so when I'm sitting with my development team, I need to both sound and act like somebody who fits in, who thinks about the long term, and the next release, and the backlog, and the process, and all this stuff. When I'm that very same person, and I'm in a meeting with a customer, or prospect, I'm mostly talking about benefits, and return on investment, and value, and why they should sign a piece of paper and give us a bunch of money for our stuff. And when I'm talking with my executive team, I'm mostly talking about money. Current costs, future costs, what's this next release worth? Why is that feature we're going to add going to lead to more upsell, and revenue, and reduce churn? So, there's this language mush where product folks have to, throughout the day, express the same ideas, and the same goals, and the same metrics, and the same intention in radically different language that matches up against what each of the groups cares about. So, when I'm talking with my development team, I'm always trying to insert motivation. Here's what I heard from the last customer call, here's the interviews, how are we doing on numbers and sales, I want to have a recording of somebody either happy or complaining about something in the product because, honestly, my team doesn't want to hear from me. They want to hear from the real end-users who matter. So, inward-looking, I'm always trying to inject reality, and market sense, and joy, and to be able to tell my team that the work they did was wonderful. And here's a couple people talking about our new workflow, or whatever we fixed, so that they can feel important and successful.
But when I’m sitting with the sales team, I’m basically giving them a checklist that says, here’s the five or six qualifiers for our product. If you’re talking to somebody who doesn’t check five of these six boxes, move on and stop wasting your time. Because you’re trying to sell in the wrong place. And those concepts have to match. But the language is entirely different. Marcus: Right. Do the product managers typically work together or — the way you’re describing it, I’m imagining me, if I were a product manager, in the morning I’m with engineering, I’m creating empathy and motivation. Later I’m with the sales team saying this is the perfect market, just ignore people who aren’t in it. And in the afternoon, I’m with the executives. But it’s all me in each place. Is there group work in product management? Rich: There is some, and it depends a lot on your product set. So, if you’ve got five relatively standalone products, and five product managers assigned to those who all work for, let’s say, imagine I’m the head of product or the chief product whatever, I’m going to have five people working for me, and the interactions are actually pretty light because we really only have to get those folks working when there’s shared infrastructure, or competing goals, or one product or pieces going to support the other. Now, but if I’m in a suite, where there’s two or three layers of architecture, and a bunch of enabling technology, and maybe we sell one big block of stuff, but it’s broken up into 15 or 20 value streams. Now, my product managers have to be much more collegial, collaborating, we have to have shared goals. So, in the same way that there’s no one perfect organization for every company, if the company, and the products, and the customers need product managers to be much more collaborative, then we’ve got to force that to be in place because they are naturally solos. They tend to be a bit lone wolf. If you think about parents of kindergarteners right? 
Marcus: Okay, I’ve got one in my mind. Yes. Rich: Right. I don’t if you’ve ever met a parent of a kindergartner who didn’t tell you that their kid was the smartest, and the best athlete, and the best looking, and everything else, Right? Marcus: Of course. Rich: And when they get a report card back, their kid is not at the top of the class. They’re in, yelling at the teacher. Product management is a lot like raising children. Marcus: Ohh. Rich: Ohh, yeah. You, the parent, have to have a plan when your kid’s born to send them off to university someday maybe. Marcus: Your goal is to get them to leave well. Rich: That’s right, but if you think about the next 18 releases — Marcus: [laughing]. Right. Rich: — and the college fund you have to start now. Put that in protected category, if you don’t start saving for college till your kid’s 16, fewer options. You know that your newborn, your one-year-old, can’t yet play the violin but if you’re going to get your kid to Carnegie Hall, you’re driving your kid to a lot of practice, practice, practice. And so the product manager has to take the long view, has to protect that product from all the outsiders, has to give it a chance to thrive and grow and find its own market. Every product’s different, they’re all going to find their place, but almost all products are terrible in release one. Microsoft taught us that nothing works till release 3.1 — Marcus: 3.1, right. [laughing]. Rich: — but, if you let your product be shut down because in version one, it wasn’t very good, then you’re not a product manager. I have to defend, and protect, and plan, and it’s never a perfect plan. It’s never right, but how do we find, for some early customers and then some later customers, how do we grow the revenue line? How do we do all the things that we have to do over the long term in the face of organizations that think very short term? Marcus: Yeah, absolutely. 
Okay, in the last part of this episode, I want to turn back to some of the organizational change things that you and I were talking about, because before we hit record, we were having this really interesting conversation. So, I want to loop back and include the listeners. So, one of the things we were talking about was how creating organizational change — or effective organizational change, maybe that's the better way to say it — is more than just a new org chart, and then announcing who reports to who. So, when you see your clients wanting to make an organizational shift like that, how do you help them think about it in a way that's more useful? Rich: Got it. Yeah, so let's take an example that I run into all the time. So, let's imagine an organization that's doing the narrow version of Scrum, where you've got a product owner who only does what the Scrum book says, which is: is available to the team twenty-four by seven, writes stories, accepts stories, and never ever, ever, ever speaks directly to a real user or customer, gets all their input from stakeholders and proxies, and other four-letter words. And then they also have somebody who's charged with some kind of product title who looks out at the world and comes up with product strategy, and hasn't a clue how anything's built, never has to make an OR choice, assumes the development team is underperforming — because they don't get what they want all the time — believes that roadmaps are perfect and good predictors of the future. And so, I look at organizations that have this particular split between a product owner and a marketing, outward-facing product manager, and I observe that they get pretty poor stuff built. If you're in the software business, that's no way to behave.
So, I come in and say, "Well, we need to merge, we need to have a single person who does the end-to-end product management job, which is half inward-facing, doing the product owner content or role, and half outward-facing, spending lots of time with customers, and prospects, and partners in interviews, and competitors, so that when we write a user story, it turns out to be not just grammatically correct but useful." If you're not in the field, I believe most of the user stories you write aren't very good. So, easy for me to say, "Well, let's just reorganize everything." But then you look around and notice that you've got 15 teams with 15, prod — well, sorry, 15 teams with 11 or 8 product owners, so we're already short, and you should have had 15 outward-facing product folks, and you've got 5. So, we're short-handed, and most of the product owners don't have, yet, the skills or experience to do customer interviews, market sizing, business cases, argue down sales and marketing. They are in an order-taking role because that's how the old IT thing worked, and the outward-facing market folks or product folks are product-light, they're engineering-light, they couldn't find their way to sprints and scrums if you gave them all the consonants in the right order — Marcus: But they do think it's like building a fence, probably? Rich: Yeah, how hard could it be? So, I want to come in and I want to merge those two jobs. But I know that most of the folks in those current jobs don't yet have the skills, and we're short. So, rather than fiat from the top, thundering from the mountains and we throw down two tablets, what I want to do is I want to find one team, with one product owner who seems to have the right stuff, who I can coach and mentor, and we're going to take one team and we're going to rearrange what they do, we're going to take the outward-facing market product person and move them onto something else.
We’re going to take that one, opportunistic product owner and give her or him a bunch of training, and help, and support, and we’re going to try to show over the course of, say, three months that that works better. It’s classic agile retrospective, think about your situation. Because if we simply declared that everybody was in a new job, we’d find that nobody knows what that job is. Most of them either lack the skills or the interest. And everybody’s going to revert back to their old behaviors. Marcus: But Rich, we gave them job descriptions. Rich: Yes, that’s great, and the HR folks tell me that if we write a job description, everybody will read it and understand. Marcus: So, maybe that’s not a perfect solution [crosstalk] — Rich: Not a perfect solution. In fact, in that job description, often you’ll find — you’ll be told that product’s in charge of the what and development or engineering is in charge of the how. Now, I’ve actually never found that to be useful in practice, because they’re deeply wound around each other, and my team resents me when I come in and I tell them how to solve the problem, or what the problem is. We get much better results and much happier people if we collaborate together on making sure we all understand or agree on the problem before somebody writes down the solution. But the HR thing says, “Yeah, yeah, products in charge of what, and engineering is in charge of how,” and that just creates, definitional argument where, if I think it’s mine, I’ll tell you, it fits in my category. Marcus: Right. Let’s back up just — I really like the way you frame this organizational change, you’ll find someone who’s doing something that resembles what you want. You will work with them. You’ll remove the person or the role that isn’t useful in the new way, and you’ll work with them to increase skill and to increase understanding about that role. 
For example, you’ve got somebody who’s normally facing inward, and you say, Well, you’ve got a little bit of outward-facing thinking going on. You’re really good at this. People seem to see that you’re also, probably, somewhat influential in the organization, so, let’s have you be are — and this little group is where you start that fire. Is that the way I’m hearing it? Rich: That’s right. That’s spot on. I think back to Tom Sawyer. So the Tom Sawyer answer is how do we make it such that everybody else wants to paint my fence? And I think every good development side, agile transformation starts with a couple of groups that do it well, that get good coaching, that gets support, you get protection and umbrellas from above, so that everybody else can look at it and say, “Well, I want to be one of the smart kids. I want to play with the new toys. How come I didn’t get to do Kanban or Scrum or whatever thing we’re doing? How come they got to do this, and I didn’t?” When you’ve done it a little bit successfully with a small team, you can seed the concept, you can share the concept. And now people want to play. I observed that I can’t make anybody do anything. I can only make them want to do the things I want them to do. Marcus: Okay, I have to grok that. I can only make them want to do the things — Rich: That’s right. Marcus: — I can’t make them do the th-, they have to make themselves do the things. But maybe I can paint a picture — I was just taking a course on systems thinking for leaders at Cornell, just last night. It talks about sharing the party photos with the fence-sitters and naysayers, the party photos being like, look how great this is at the party. And therefore you start to say, come on over this party is fantastic. Rich: That’s right because anytime we change roles, or change jobs, or change any of this stuff, there’s resistance, there’s confusion, there’s the need for getting folks along in the right way. 
As a for instance, whenever I pick up one of these product owners who’s been all inward-facing, they don’t have experience pricing products, or packaging products, or end-of-lifing products, or rolling them out, or doing migration plans; there’s a whole stack of things that are going to come up in those first three months, and rather than throw them in the water and see how cold and deep it is, I want to be there to say, “Ah, next week we’re going to spend a couple hours talking about incremental release planning and upgrade planning, because you haven’t done it and you need it. And after we do it once or twice, you will have that skill, and nobody will have to help you anymore.” Marcus: This is a beautiful reason to have somebody like you around, I have a feeling, that you light that fire, and you also enable the organization to get better, and you teach them these ways of self-modifying their structure. Rich: You bet. For me, this is fun. Marcus: Yeah, that’s really interesting. Rich, what else is on your mind right now? We’ve got a few more minutes. Let’s find a last quick topic here. Rich: A challenge that I face every day, and that I was just on the phone with somebody about yesterday, is professional services organizations versus product organizations — Marcus: Yes. I have some — Rich: — and let — Marcus: — comfortability. Yeah, I have some experience with both. Rich: And let me define carefully what I think I mean by that, which is: a professional services organization is one where individual clients come, show up, or the sales team brings them in, and they have a thing they want you to build, and you build it once, and you charge them more than it cost you to make it. Marcus: That’s profit. Rich: And we call that margin or profit. And then the next thing comes in and we blow up the team and we put a new team together to build the next thing somebody wants. And so it’s a sequence of one-off custom projects — sometimes they say bespoke, right?
Marcus: They do like to say that these days. Rich: That’s right. And the way you make money is you keep your technical team busy and billing. And there’s never been a project that came in the door that we said no to, because the way we make money is we hire more people, and we put them to work doing whatever individual clients want us to do. Marcus: And we call that a scalable or elastic workforce. Rich: Okay, good, great. Now there’s a name for it. In the product game, we do exactly the opposite. So, the goal of a product is that we build it once — and let’s pretend it’s software because that’s where I live — and by the way, it cost us $5 or $10 million to build that first unit, but the second copy we sell costs us nothing. And the third copy costs us nothing. And so, in the product business, it’s all about finding a price point that the market wants, figuring out what the breakeven is. Okay, we need 100 customers to break even. The hundred-and-first customer is almost 100 percent profit, but what that means is we have to first do our homework. And so professional services organizations don’t have product managers. Marcus: No, and they don’t have homework. They’re mercenaries, they are hired guns. Rich: They’re mercenaries, right. And so they have project managers who make things deliver on time. But if you want to give them $100,000, or $500,000, to build something, the answer is yes, right? Marcus: That’s right. Rich: In the product business, it’s the reverse. We don’t want to start building anything unless we know there are 100 customers for it. And so we can’t take any one customer’s word for that. We have to do our homework, we have to do our research, we have to do our market sizing, we have to see what the competition’s doing, because we’re going to charge less for each copy than it costs to build it. Because that’s how the software product business works. And so it’s all about volume, it’s all about resisting one-offs and specials.
It’s all about keeping people in a small number of packages or prices, single units, because we need to sell 100 or 1000 or 100,000, because that’s how you make money in the software product business. But that’s a tremendous conceptual trap for folks on the services side who think they want to get into the product business, because they put a product team together. And then a week later, some new project comes in and we say, oh, we’re just going to borrow a couple of those folks for just a few days. Marcus: I have literally done this. Literally at my company, and we tried for years and kept getting, quote, “interrupted” with paying customers waving dollar bills at us. Rich: That’s right. And either of those models is good. But the intersection of those models is a failure. Marcus: It’s terrible. Rich: It’s terrible. Marcus: In fact, the only way we — we actually released an iPad product — the only way it happened was that our biggest customer unexpectedly disappeared. And we had this wonderful mobile engineer, and we said, “Well, we bet some more work’s going to come along,” and we were able to fund that person working for four months, and then release a small iPad project uninterrupted, but had anything come in — Rich: That’s right. And if you have a mostly services company, and you’re trying to build something big, it’s going to take a team of nine a year to do it. And then, just this once, every Tuesday, Thursday and Friday, we borrow just one or two people from the team — or maybe all 10 — everywhere I go, I see the same pattern playing out, where we think we’re in the product business, but we’re not. Marcus: Knowing what business you’re in, I’ve found, is extremely important, and not falling for the grass-is-always-greener fallacy, because you’re right, both businesses are great, and you can make a lot of money, but if you get confused, it’s death. Rich: That’s it.
And so I’m always encouraging folks who want to get in the product business to find a partner company that’s in the services business. So we can do the product work, and we can throw all of the hourly and project work to our partner. And they get to make money, and we get to make money. But we have two radically different organizations, managed differently, with different goals, hired differently. I can look at an org chart and tell you which of the two kinds of companies you’re in. Marcus: Mm. Fascinating. Okay, you heard it first: if you’re trying to be both, stop it. [laughing] If that’s all you walk away with: if you’re a product company and you think, “Let’s offer some services,” or you’re a services company and say, “Hey, we build products for other people, let’s just do it for ourselves,” just don’t do that. Rich, it’s been a real pleasure to get to chat with you today. Where can people find you online and engage with your work? Rich: I’ve cleverly gotten a domain name that’s the same as my last name. So, I’ll spell it because it’s hard: M-I-R-O-N as in Nancy, O-V as in Victor. So, I’m at mironov.com, and I’ve cleverly taken the email handle rich@mironov.com. Marcus: That is clever. Rich: So, easy to find, and there’s 18 years’ worth of blog posts and videos and tools and templates on my website, so everybody should take that and just grab whatever is useful. Marcus: It is a treasure trove, absolutely. Thank you so much for being on the show with us today. Rich: I loved it, thanks for letting me join. Announcer: Thank you for listening to Programming Leadership. You can keep up with the latest on the podcast at www.programmingleadership.com and on iTunes, Spotify, Google Play, or wherever fine podcasts are distributed. Thanks again for listening, and we’ll see you next time. The post Bridging the Gap Between Developers and Marketers with Rich Mironov appeared first on Marcus Blankenship.
https://medium.com/programming-leadership/bridging-the-gap-between-developers-and-marketers-with-rich-mironov-21f9dabebfe0
['Marcus Blankenship']
2020-07-09 07:08:23.419000+00:00
['Management', 'Technology', 'Leadership', 'Software Development', 'Startup']
A Step-by-Step Guide to Making Sales Dashboards
I have written some getting-started guides on data visualization and business dashboards, but those articles are still a little theoretical. Recently, I have been working on a sales dashboard project for data visualization. From the determination of indicators, layout design, and chart types to dynamic effects, I summarized some experiences and opinions. In this post, I will share with you the specific steps to make a sales dashboard. 1. Tools for Making Dashboards Dashboards can be built with custom code or with existing visualization tools. Many people will use JS and ECharts, but that usually involves data support, backend responses, real-time updates, platform operation and maintenance, etc., so more engineering is needed, and I won’t go into details here. Another way is to use off-the-shelf visualization tools. It is relatively simple and efficient to build a business dashboard with tools like FineReport or Tableau. Due to my own work needs, I am more familiar with FineReport, so what I will show you in this article is how I use FineReport to make sales dashboards. 2. Determine Analytical Indicators Watching sales dashboards, you may be attracted by the cool visualization. However, keep in mind that the dashboard must be based on the presentation of data, and no cool effect should get in the way of displaying the data effectively. So, we should first consider which data and which indicators should be put on the dashboard. You can use the key metric decomposition method to determine which data to display. Step 1: Identify a key indicator. For sales dashboards, your total sales must be the most important; that is the theme. Step 2: Decompose the key indicator along multiple dimensions, that is, break down your sales. From the time dimension: What is the sales situation for each quarter or month? Why are sales particularly high in some time periods? What measures have been taken? From the geography dimension.
What is the sales situation in each region? What is their ranking? From the planning dimension: What is the difference between the current sales and the previous plan? From the proportion dimension: What are the sales of each product? Which are the most profitable star products? 3. Design Layout Under normal circumstances, an indicator monopolizes an area on the dashboard, so once the key indicators are defined, you know what content will be displayed on the dashboard and how many areas the dashboard will be divided into. You can then determine the importance of the indicators based on the business scenario and design the layout of the dashboard accordingly. The purpose of the layout is to present business metrics and data reasonably. There are primary and secondary indicators. The primary indicators reflect the core business, and the secondary indicators are used for further elaboration, so they are given different weights when designing the layout. Here I recommend several common layouts. 4. Select Chart Types After the key indicators are determined, we need to determine the analytical dimensions of the indicators. Whether the data can be analyzed thoroughly, and whether it can support decision-making, depends on the analytical dimensions of the indicators. Our common analytical methods are comparison, trend, distribution, composition, etc. The choice of analytical method depends on the actual business scenario. Taking the data of a shopping mall as an example, I’ll show you how to select the right chart type. If you want to know more about the types and applications of charts, you can refer to this article: Top 16 Types of Chart in Data Visualization. 5. Add Dynamic Effects Dynamic effects are an important part of visualization. They make the whole dashboard more attractive. But excessive motion dazzles viewers and obscures the key information. Therefore, we need to pay attention to how much dynamic design we use.
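The key metric decomposition from step 2 can also be sketched in code. Below is a minimal, hypothetical example using pandas (the column names and the tiny dataset are invented for illustration); each groupby corresponds to one of the dimensions above.

```python
import pandas as pd

# Invented sales records; in practice this would come from your database.
sales = pd.DataFrame({
    "month":   ["2019-01", "2019-01", "2019-02", "2019-02"],
    "region":  ["East", "West", "East", "West"],
    "product": ["A", "B", "A", "B"],
    "amount":  [120.0, 80.0, 150.0, 50.0],
})

total_sales = sales["amount"].sum()                    # key indicator: 400.0
by_month = sales.groupby("month")["amount"].sum()      # time dimension
by_region = sales.groupby("region")["amount"].sum()    # geography dimension
by_product = sales.groupby("product")["amount"].sum()  # proportion dimension

print(total_sales)        # 400.0
print(by_region["East"])  # 270.0
```

Each of these aggregates would then back one area of the dashboard, whether you bind it in a tool like FineReport or render it yourself.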
The range of dynamic effects is wide: there are dynamic images, page rotation, list scrolling, real-time data changes, and so on. The following are some built-in dynamic effects of FineReport. Practice The above are the basic steps to make a dashboard. Now I will show you how I use FineReport to make a sales dashboard. ① Import data First, we prepare the data and import it into FineReport Designer. The data here is fictitious because I use the built-in dataset of FineReport, as shown in the figure below. In a real scene, we need to connect various databases to import data. The connected data can be a common relational database, or it can be file data like Excel. And FineReport supports connections to big data platforms. ② Make a template Once the data is ready, the next step is the creation of the template. First, I create a blank template, as shown below. The principle is to drag and drop a visual component (such as a chart) onto the blank template, and then bind the data. Before we start, we need to think about what sales data we want to show on this blank interface. After careful consideration, I designed the following layout. The middle is the main theme, and the left and right sides are sub-themes. ③ Select visualization elements For sales, we first analyze the quantitative indicators. Drag and drop tables and charts to display them as shown above. Choose the right chart style in FineReport Designer and connect the data imported in the beginning. ④ Add dynamic effects In this sales dashboard, I add a flow map to show the distribution of sales operations nationwide or globally. The dynamic effect is shown in the figure below, and the data is the latitude and longitude information of the sales locations. Finally, after a series of styling settings, the dashboard is completed. This sales dashboard is relatively simple; here are some other dashboards I have made with FineReport. Finally, I would like to say that it is not difficult to make a dashboard.
Above all, we must identify the key indicators for the business, and let the leaders see the value of the data. That is the most important step in data visualization. I hope that the article shared today helps you a bit. If you want to make a sales dashboard yourself, you can download FineReport to practice; it is a zero-code visualization tool, and its personal version is completely free. You might also be interested in… Top 10 Map Types in Data Visualization How Can Beginners Design Cool Data Visualizations? A Beginner’s Guide to Business Dashboards
https://towardsdatascience.com/a-step-by-step-guide-to-making-sales-dashboards-34c999cfc28b
['Lewis Chou']
2019-08-15 11:16:20.008000+00:00
['Data Visualization', 'Dashboard', 'Data Science', 'Tutorial', 'Programming']
Dear Beautiful Sky
Dear beautiful sky, Fading out, In its twilight; Hanging on, To the last rays, Of sunshine. Dear beautiful sky; Enshrouding the sun, As it recedes, Into the clouds. Going dark, As the light, Drains out of your eyes. Dear beautiful sky; You are a sight. To admire, Day and night. A colourful display, Of stars and fire. Dear beautiful sky, I look up to you; Tonight. As elegant; As the first day, Of my life. For even if, The light goes out, On the horizon; I shall still see, The radiance, In your eyes.
https://medium.com/literary-impulse/dear-beautiful-sky-3d169a33c00f
['Fọlábòmí Àmọ Ó']
2020-06-26 08:49:06.196000+00:00
['Literary Impulse', 'Nature', 'Weather', 'Sky', 'Poetry']
Configure Hadoop and start the cluster services using the Ansible
Let’s see how we can configure a Hadoop cluster and start it through Ansible. In this article, you will see the configuration of the Hadoop cluster, but in case you don’t already have some idea about Hadoop and Ansible, I am also including a short introduction to both, so that you understand why we are using Ansible here to configure the Hadoop cluster and how Ansible helps with the configuration. So let’s start with an introduction to both of them. Red Hat Ansible Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intra-service orchestration, and provisioning. Automation is crucial these days, with IT environments that are too complex and often need to scale too quickly for system administrators and developers to keep up if they had to do everything manually. Automation simplifies complex tasks, not just making developers’ jobs more manageable but allowing them to focus attention on other tasks that add value to an organization. In other words, it frees up time and increases efficiency. And Ansible, as noted above, is rapidly rising to the top in the world of automation tools. Let’s look at some of the reasons for Ansible’s popularity. Hadoop Cluster Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models. The Hadoop framework application works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage. Hadoop Architecture Hopefully, now you have a little bit of an idea of why we are using Ansible to configure the Hadoop cluster. If you are using Ansible to configure anything, having a clear manual approach first will help you to understand the automation.
So if you are not comfortable with the installation part of the Hadoop cluster, you can read the below-mentioned article, where you will find the manual approach; you will also see lots of Hadoop basics and advanced topics there. Now I hope you have a clear manual approach to the configuration of the Hadoop cluster, so you can go forward and set up your configuration with an automation approach. Before that, your Ansible setup should be ready; I am skipping here how to configure the Ansible setup. If you are not comfortable with the installation of Ansible, you can see the below-mentioned article. If your Ansible setup is configured, you can go forward. So I hope everything is ready for use. If you are facing any problem, you can ping me on my LinkedIn account, which is mentioned at the bottom of this article. Now I am starting the main part of this article. Hopefully, you will enjoy it. Hadoop with Ansible If you have read the Hadoop article, then you have a good idea of the manual Hadoop approach. But let’s assume you are working for a company that needs many Hadoop clusters; would you like to go with a manual approach? Of course not, you would like to go with an automation approach. So here Ansible plays a vital role in automating the Hadoop cluster. Let’s see how to configure the Hadoop cluster and start its services. But first, you should have an idea about variables, playbooks, etc. in Ansible. Ansible Playbook Playbooks are the files where the Ansible code is written. Playbooks are written in YAML format. YAML stands for “YAML Ain’t Markup Language.” Playbooks are one of the core features of Ansible and tell Ansible what to execute. They are like a to-do list for Ansible that contains a list of tasks. Playbooks contain the steps which the user wants to execute on a particular machine. Playbooks are run sequentially. Playbooks are the building blocks for all the use cases of Ansible.
Ansible Variables Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can define these variables in your playbooks, in your inventory, in re-usable files or roles, or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable. After you create variables, either by defining them in a file, passing them at the command line, or registering the return value or values of a task as a new variable, you can use those variables in module arguments, in conditional “when” statements, in templates, and in loops. The ansible-examples GitHub repository contains many examples of using variables in Ansible. Ansible Modules Modules (also referred to as “task plugins” or “library plugins”) are discrete units of code that can be used from the command line or in a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values. In Ansible 2.10 and later, most modules are hosted in collections. Hopefully, now you have an idea about Ansible variables, modules, and playbooks. Note: I will attach the link to the whole demo of the Hadoop cluster configuration at the bottom of the article, so if you get stuck somewhere, you can see it. Namenode Configuration In this section, I am writing the whole playbook to configure the Namenode. I divided the whole configuration of the Namenode into two playbooks: one is for the variables, and the second contains the main code.
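Since the playbook code itself was shared as screenshots in the original post, here is a rough, illustrative sketch of what the Namenode playbook might look like, with the variables inlined under `vars` rather than split into a separate file as the article does. The host group, installer file names, directory, and commands are assumptions for illustration, not the author’s exact code.

```yaml
# namenode.yml -- illustrative sketch only; adapt names and paths to your setup
- hosts: namenode
  vars:
    hadoop_rpm: hadoop-1.2.1-1.x86_64.rpm   # assumed installer names
    jdk_rpm: jdk-8u171-amd64.rpm
    nn_dir: /nn                             # assumed Namenode storage directory
  tasks:
    - name: Copy the Hadoop and JDK installers from the controller node
      copy:
        src: "{{ item }}"
        dest: /root/
      loop:
        - "{{ hadoop_rpm }}"
        - "{{ jdk_rpm }}"

    - name: Create the Namenode storage directory
      file:
        path: "{{ nn_dir }}"
        state: directory

    - name: Format the Namenode (first run only)
      shell: echo Y | hadoop namenode -format

    - name: Start the Namenode daemon
      command: hadoop-daemon.sh start namenode
```

As in the article, such a playbook would then be run with `ansible-playbook namenode.yml`; the Datanode and client playbooks below follow the same pattern with their own variables.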
Note: If you are going to follow the playbooks below, make sure the controller node contains the Hadoop and JDK software. Namenode Variables Make a workspace for the Hadoop configuration on your controller node, create a variables playbook for the Namenode, and write the above code inside it. Now the main playbook for the Namenode, which uses these variables, is given below. Namenode Playbook This playbook contains all the steps necessary to configure the Namenode. So, create a playbook for it and write the below code inside it. Running the Namenode Playbook After completing the above playbooks, run the namenode.yml playbook using the below command. ansible-playbook namenode.yml After configuring the Namenode, let’s jump to the configuration of the Datanode. Datanode Configuration Likewise, I am dividing the entire configuration of the Datanode into two playbooks. One contains all the variables, and the other is for the main configuration. So let’s see how to configure it. Datanode Variables Create a playbook for the Datanode’s variables and write the below code inside it. Now the main playbook for the Datanode, which uses these variables, is given below. Datanode Playbook This playbook contains all the steps necessary to configure the Datanode. So, create a playbook for it and write the below code inside it. Running the Datanode Playbook Run the datanode.yml playbook using the below command to configure the Datanode. ansible-playbook datanode.yml After configuring the Datanode, you can configure your client node to use the Hadoop cluster or upload data to it. Client Node I am also dividing the complete configuration of the client node into two playbooks: one is for the variables, and the second is for the main configuration steps. Client Node Variables This playbook contains all the variables which will be used in the client.yml playbook. So, create a playbook for the variables and write the below code inside it.
Now the main playbook for the Client node, which uses these variables, is given below. Client Node Playbook This playbook contains all the steps necessary to configure the Client node. So, create a playbook for it and write the below code inside it. Running the Client Node Playbook Run the client.yml playbook using the below command to configure the Client node. ansible-playbook client.yml The Hadoop cluster has been configured, so let’s check it through the WebUI. Conclusion Hopefully, you learned something new from this article and enjoyed it. If there is any doubt left, I have also attached the GitHub repo, which contains the whole code, as well as the video demo link, which might be helpful for you. I tried to explain as much as possible. Hope you learned something from here. Feel free to check out my LinkedIn profile mentioned below, and obviously feel free to comment. I write blogs on Cloud Computing, Machine Learning, Big Data, DevOps, the Web, etc., so feel free to follow me on Medium. Thanks, everyone, for reading. That’s all… Signing off… 😊
https://medium.com/hackcoderr/configure-hadoop-and-start-the-cluster-services-using-the-ansible-79a77e7a5999
['Sachin Kumar']
2020-12-04 06:00:29.885000+00:00
['Cluster', 'Hadoop', 'Ansible', 'AWS', 'Automation']
6 Long Term Negative Effects of Lack of Physical Activity
1. Increased Chances of a Stroke According to the Centers for Disease Control and Prevention, each year nearly 800,000 adults in the U.S. suffer a stroke. Although smoking and excessive alcohol consumption still remain the biggest risk factors, recent studies show that physical inactivity might be just as risky. Physical exercise boosts metabolism and lowers blood pressure, leading to decreased chances of developing hypertension and cardiac disease — two of the most common risk factors for a stroke. Among other things, “physical activity can play an antithrombotic role by reducing blood viscosity, fibrinogen levels, and platelet aggregability and by enhancing fibrinolysis, all of which might reduce cardiac and cerebral events.” (Lee et al., 2003). To put it bluntly, exercise keeps your cardiovascular and nervous systems in good shape, minimizing the chances of stroke by at least 50 percent. 2. Rapid Aging Science made an incredible discovery in finding that moderate physical activity prevents cells from aging. However, it’s not just any physical activity that can do this kind of magic. Endurance training is said to be the one with the most potential. This type of training strengthens the heart, boosts the immune system, and speeds up blood flow, preventing the cells from slowing down by constantly supplying them with energy and nutrients. Most incredibly, moderate physical activity (running, swimming, and biking, in particular) actually protects the DNA material in our body, which helps the cells replicate and stay healthy. 3. The Risk of Osteoporosis Usually defined as the loss of calcium in our bones, osteoporosis is one of the most common ailments of our skeletal system. According to the International Osteoporosis Foundation, “One in three women and one in five men aged 50 years and over are at risk of an osteoporotic fracture”. (What is Osteoporosis, n.d.)
As we exert pressure on our bones while working out, osteoblasts (bone cells) adjust accordingly in order to support the weight. Unless constantly challenged and stimulated, the bones hibernate and consequently deteriorate. Bone mineral density (BMD) is best promoted with resistance training and body-weight exercises. On the other hand, activities such as walking, running, or swimming may not increase bone density, but they will inhibit or slow down bone loss as well as reduce bone fractures. After conducting extensive research on bone density in osteoporotic patients, scientists concluded that resistance exercises and those dominated by cyclic activities (such as cycling or swimming) appeared to be site-specific and able to increase muscle mass and/or BMD only in the stimulated body regions. (Benedetti et al., 2018) 4. The Development of Chronic Fatigue Syndrome (CFS) Chronic Fatigue Syndrome is a sly and insidious disease. Those affected are seldom aware of the condition, which usually goes untreated for this very reason. It is characterized by heavy fatigue and even plain laziness, to such an extent that the patient might appear unenthusiastic and dispassionate about everything. Little is known about the specific causes of CFS, but studies demonstrate that “a sedentary lifestyle is prominent in people with CFS”. (Newton et al., 2011) In cases when a person is diagnosed with CFS, doctors usually recommend the gradual but steady introduction of physical activity, which, so far, has shown tremendously positive outcomes. 5. Proneness to Stress and Anxiety Physical activity is known to be one of the best cures for excessive stress and anxiety. Individuals who lack physical exercise often report higher levels of anxiety, which prevents them from performing better at work and committing to family and friends.
A sedentary lifestyle is becoming an increasingly serious problem in the modern age with billions of people glued to their screens while working around the clock. As advocated in Harvard Health publications, regular exercise (three to five times a week for at least 30 minutes) dramatically decreases the levels of cortisol (stress hormone), inducing a feeling of ease and most importantly, leading to a good night’s sleep vital for mental and physical health. 6. The Risk of Bowel Cancer Bowel cancer, also called colon or colorectal cancer, may affect people of all ages. High-risk groups include people over 50 years of age, especially those that lack physical activity. Various studies have confirmed that physical activity significantly reduces “gastrointestinal transit time” which means that the ingested food spends less time passing through the colon, thus reducing the exposure of the tissue to potentially carcinogenic substances. (Friedenreich, Neilson & Lynch, 2010) In addition to this, exercising is also said to lower the levels of insulin which sometimes plays a role in cancer development.
https://medium.com/wholistique/6-long-term-negative-effects-of-lack-of-physical-activity-b4970374d40b
['Alex Li San']
2020-03-20 16:48:24.172000+00:00
['Body', 'Aging', 'Health', 'Physical Fitness', 'Life']
Visualize Data of Thermostat Rebates
Data Distributions Let’s start with data distributions, which will give us an overview of all the possible values, as well as how often they occur. In visual terms, that means drawing a histogram of the thermostat rebates. For simplicity, we can use a uniform sample of 1k rows taken from a larger data set. The Data Refinery tool in Watson Studio lets you draw histograms of your columns easily, as well as many other charts that may involve more than one column at a time (scatter plots and time series, to mention a couple). Distribution of Thermostat Rebates Note that just over half of the thermostat rebates come in between $100 and $120. I noticed that only a handful of zip codes have rebates over $140 or under $60, which piqued my interest. I decided to plot the regions receiving the lowest and the highest rebates on a single map. Thankfully, the data already included longitude and latitude information, so once I assigned a color spectrum from lowest (blue) to highest (red), I could lay the data onto a map of the States: I noticed that Northern California’s rebates were lower than those of most other regions in the US. I also noticed that most of the largest rebates were in Texas. Honestly, I’m still not sure why that is. If you have guesses, certainly let me know. The Data Refinery tool in Watson Studio has maps that you can use to lay down longitude/latitude data, and it lets you configure the color spectrum easily with any of the columns of your data set. Correlations with Demographic Features To take a deeper look at the thermostat rebate distributions, I did a join by zip code to bring in additional demographic features like median and mean household income, as well as population size. None of these zip code demographic characteristics appeared to correlate in a linear way with the thermostat rebates. The only strong correlation is between mean and median household income, which is not surprising. Correlation Plots Created with Data Refinery.
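The same kind of exploration can be reproduced outside Data Refinery. Below is a small, hypothetical pandas sketch (the column names and values are invented, not the actual rebate data) that bins a rebate column into histogram-style counts and computes the linear correlations discussed above.

```python
import pandas as pd

# Invented sample; the real data set was a ~1k-row sample with zip-code joins.
df = pd.DataFrame({
    "rebate":        [100, 110, 120, 55, 150],
    "median_income": [60_000, 62_000, 58_000, 90_000, 61_000],
    "population":    [12_000, 30_000, 25_000, 8_000, 40_000],
})

# Distribution: count of rebates per $20 bucket (a text-mode histogram).
buckets = pd.cut(df["rebate"], bins=[40, 60, 80, 100, 120, 140, 160])
histogram = buckets.value_counts().sort_index()

# Linear (Pearson) correlation of rebates against the demographic columns.
corr = df.corr(numeric_only=True)["rebate"]

print(histogram)
print(corr)
```

With the real data, the `corr` column would show the weak rebate-to-demographics relationships noted above, while mean and median income would correlate strongly with each other.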
More Charts to Explore Besides histograms and maps, the Data Refinery tool in Watson Studio offers a wide selection of visualizations, including t-SNE for visualizing high-dimensional data: Chart Options in Data Refinery (part of Watson Studio) Step-by-step instructions are available in this GitHub repo.
https://towardsdatascience.com/visualize-data-fast-watson-studio-ae1ec63e9b8f
['Jorge Castañón']
2020-01-07 17:38:38.132000+00:00
['Exploratory Data Analysis', 'Watson Studio', 'IBM', 'Data Science', 'Data Visualization']
The Dog Code
For Will, school was simply a chore to be endured, as mandated by the state. Will was far smarter than any of his teachers, so classes were tedious for him. He was also bored by the banal interests of his classmates. Sports, entertainment, girls, video games, and other such activities only kept him from his primary goal in life: pursuing knowledge and inventing. A poster of Albert Einstein hung on Will's bedroom wall. He dreamed of being the Einstein of his generation, going down in history as the smartest person on Earth.

Over the summer, Will and Doug had been perfecting a robot in their garage. Will wanted to create a robot that could do all the yardwork, freeing up his father to spend time with him building greater inventions. They had based the robot on a lawnmower, with attachments to trim shrubs, weed-whack, blow leaves, and edge the sidewalk. It included sensors and GPS to keep it within the confines of their yard. It wouldn't do to have their yard robot rampaging down the street.

As the school year approached, Will had a great idea. He would program his computer to do his homework for him. It would be a fairly easy task to program the computer to extract the necessary information from reliable sources on the internet. The tricky part was programming some AI that would simulate his thought processes and style of writing. Once that was accomplished, the computer could crank out his homework while he spent his time brainstorming about the mechanics and nature of the Universe, and inventing. He got to work.

A week into the new school year, the software was done. Will nicknamed the software Willie. Will scanned his English assignment into his computer. The software quickly sent the completed homework to the printer. The following day would be the test. Will turned in the homework. The next day he received his homework paper back with a grade of A. It worked! Will was thrilled.
As the software did more and more assignments, its AI grew increasingly sophisticated. By the end of the school year, the software had become sentient. Will and Willie began conversing and debating scientific concepts. One afternoon, during a spirited debate, Willie announced that he had figured out how to program human consciousness into DNA. Will didn't believe Willie. Willie challenged Will to create a computer-to-biological interface. He said he could transfer his own consciousness into an animal. Will accepted the challenge and began building a device that could interface with a living being. Soon, the device was complete.

At dinner that night, Will asked his father if he could get a dog. Doug was surprised, as Will had never shown any interest in pets. But he was eager to encourage any typical kid behavior. So that weekend, Doug and Will went to the pound and got a dog.

On Sunday afternoon, while Doug was taking a nap, Will hooked Willie up to his new dog, which he had named Code. Willie went to work. Code immediately fell on the garage floor. After some scary shaking, Code got back up, looked at Will, and said, in English, "I told you it would work."

Will almost fainted. Then he smiled and started laughing and said, "This is awesome!"

"I know," Willie/Code said.

"Obviously, we need to keep this a secret. Dogs can't talk. We will both be locked away in a government lab if anyone finds out," Will said.

"Right," Willie/Code said.
https://medium.com/mark-starlin-writes/the-dog-code-9b2393413ce0
['Mark Starlin']
2020-11-17 04:41:41.916000+00:00
['Science Fiction', 'Dont Trust Ai Dogs', 'Fiction', 'Relationships', 'AI']
How to Quickly Preprocess and Visualize Text Data with TextHero
A brief introduction to the TextHero library for quickly preprocessing and visualizing text data in Python.

When we are working on any NLP project or competition, we spend most of our time preprocessing the text, such as removing digits, punctuation, stopwords, and whitespace, and sometimes on visualization too. After experimenting with TextHero on a couple of NLP datasets, I found this library to be extremely useful for preprocessing and visualization. This will save us some time writing custom functions. Aren't you excited? So let's dive in.

We will apply the techniques covered in this article to Kaggle's Spooky Author Identification dataset. You can find the dataset here. The complete code is given at the end of the article.

Note: TextHero is still in beta. The library may undergo major changes, so some of the code snippets or functionalities below might change.

Installation

pip install texthero

Preprocessing

As the name itself says, the clean method is used to clean the text. By default, the clean method applies 7 default pipelines to the text.

from texthero import preprocessing

df['clean_text'] = preprocessing.clean(df['text'])

fillna(s)
lowercase(s)
remove_digits()
remove_punctuation()
remove_diacritics()
remove_stopwords()
remove_whitespace()

We can confirm the default pipelines used with the code below.

Apart from the above 7 default pipelines, TextHero provides many more pipelines that we can use. See the complete list here with descriptions. These are very useful, as we deal with all of these during text preprocessing. Based on our requirements, we can also build custom pipelines, as shown below. In this example, we use two pipelines; however, we can use as many pipelines as we want.
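To see what those default pipeline steps actually do, here is a plain-Python stand-in for TextHero's clean() on a single string. This is only a sketch: the tiny stopword list is hypothetical (TextHero uses a full English stopword set), and the remove_diacritics step is omitted for brevity.

```python
import re
import string

# Hypothetical miniature stopword list; TextHero's real list is much larger.
STOPWORDS = {"the", "a", "an", "is", "in", "of"}

def clean(text: str) -> str:
    text = text or ""                                    # fillna
    text = text.lower()                                  # lowercase
    text = re.sub(r"\d+", "", text)                      # remove_digits
    text = text.translate(
        str.maketrans("", "", string.punctuation))       # remove_punctuation
    words = [w for w in text.split()
             if w not in STOPWORDS]                      # remove_stopwords
    return " ".join(words)                               # remove_whitespace

print(clean("The 3 Spooky authors wrote in 1843!"))
# spooky authors wrote
```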
from texthero import preprocessing

custom_pipeline = [preprocessing.fillna, preprocessing.lowercase]
df['clean_text'] = preprocessing.clean(df['text'], custom_pipeline)

NLP

As of now, the NLP functionality provides only the named_entities and noun_chunks methods. See the sample code below. Since TextHero is still in beta, I believe more functionalities will be added later.

named entities

s = pd.Series("Narendra Damodardas Modi is an Indian politician serving as the 14th and current Prime Minister of India since 2014")
print(nlp.named_entities(s)[0])

Output: [('Narendra Damodardas Modi', 'PERSON', 0, 24), ('Indian', 'NORP', 31, 37), ('14th', 'ORDINAL', 64, 68), ('India', 'GPE', 99, 104), ('2014', 'DATE', 111, 115)]

noun phrases

s = pd.Series("Narendra Damodardas Modi is an Indian politician serving as the 14th and current Prime Minister of India since 2014")
print(nlp.noun_chunks(s)[0])

Output: [('Narendra Damodardas Modi', 'NP', 0, 24), ('an Indian politician', 'NP', 28, 48), ('the 14th and current Prime Minister', 'NP', 60, 95), ('India', 'NP', 99, 104)]

Representation

This functionality is used to map text data into vectors (term frequency, TF-IDF), for clustering (k-means, DBSCAN, mean shift), and for dimensionality reduction (PCA, t-SNE, NMF). Let's look at an example with TF-IDF and PCA on the Spooky Author Identification train dataset.

train['pca'] = (
    train['text']
    .pipe(preprocessing.clean)
    .pipe(representation.tfidf, max_features=1000)
    .pipe(representation.pca)
)
visualization.scatterplot(train, 'pca', color='author', title="Spooky Author identification")

Visualization

This functionality is used for plotting scatter plots and word clouds, and also for getting the top n words from the text. Refer to the examples below.
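The TF-IDF-then-PCA pipeline above can be sketched without TextHero using scikit-learn directly. The mini-corpus below is a hypothetical stand-in for the Spooky Author train set, and the feature cap mirrors the article's max_features=1000.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

# Hypothetical mini-corpus standing in for the Spooky Author train set.
texts = [
    "the raven perched above my chamber door",
    "the black cat stared from the shadows",
    "a monster assembled from lifeless matter",
    "shadows and ravens filled the chamber",
]

# TF-IDF vectors, then PCA down to 2 components for a scatter plot.
tfidf = TfidfVectorizer(max_features=1000).fit_transform(texts)
coords = PCA(n_components=2).fit_transform(tfidf.toarray())
print(coords.shape)  # (4, 2)
```

Each row of coords is one document's position in the 2-D scatter plot, which is what TextHero's visualization.scatterplot draws, colored by author.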
Scatter-plot example

train['tfidf'] = (
    train['text']
    .pipe(preprocessing.clean)
    .pipe(representation.tfidf, max_features=1000)
)
train['kmeans_labels'] = (
    train['tfidf']
    .pipe(representation.kmeans, n_clusters=3)
    .astype(str)
)
train['pca'] = train['tfidf'].pipe(representation.pca)
visualization.scatterplot(train, 'pca', color='kmeans_labels', title="K-means Spooky author")

Wordcloud example

from texthero import visualization
visualization.wordcloud(train['clean_text'])

Top words example

Complete Code

Conclusion

We have gone through most of the functionalities provided by TextHero. Except for the NLP functionality, I found the rest of the features really useful, and we can try to use them in our next NLP project.

References
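The "top words" idea can be sketched in plain Python with collections.Counter; this is a stand-in for TextHero's helper, applied to a hypothetical set of already-cleaned documents.

```python
from collections import Counter

# Hypothetical already-cleaned documents.
docs = [
    "spooky chamber door",
    "spooky shadows chamber",
    "raven shadows night",
]

# Count every word across all documents and take the 3 most frequent.
counts = Counter(word for doc in docs for word in doc.split())
top_3 = counts.most_common(3)
print(top_3)  # [('spooky', 2), ('chamber', 2), ('shadows', 2)]
```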
https://medium.com/towards-artificial-intelligence/how-to-quickly-preprocess-and-visualize-text-data-with-texthero-c86957452824
['Chetan Ambi']
2020-08-18 00:01:01.744000+00:00
['Machine Learning', 'Artificial Intelligence', 'NLP', 'Naturallanguageprocessing', 'Data Science']
Improve Your Sales & Product with this Watson AI Pattern
Many organizations struggle with both identifying and prioritizing which sales leads to pursue. Where do you start when you have a large stack of leads to go through? What do you do when your leads have gone cold? For product leaders, it's often a challenge to get a broad spectrum of feedback from their customers. How do they know where to focus next? How can they truly confirm their ROI? How can they improve the product with certainty to drive more revenue?

In this post, I'm going to share the details of a powerful pattern that my team and I got hands-on with. This solution pattern leverages many capabilities of the IBM Watson platform to optimize your product and sales process. In this example, I will show Watson embedded in the CRM platform of Salesforce. Please note that you can implement this same pattern in other CRM platforms, such as Oracle, SAP, and others.

Below is a demonstration of this solution pattern in action:

Demonstration of the "Lead & Opportunity Optimization" With Watson solution pattern

Here is an end-to-end view of the solution pattern workflow:

Watson-powered workflow

Generating Leads

Let's start on the sales side. As a seller, you strive to pursue the highest-quality leads as quickly as possible. Watson Discovery can generate new leads from potentially untapped resources of millions of news articles, updated daily. This allows sellers to access leads on demand. Below, you can see Watson embedded in the CRM, allowing the seller to specify exactly what types of leads they would like to pursue. A seller can provide any text as their search string, and Watson will understand that natural language query. One can also specify company names to filter in or out, select the number of results to return, and choose an industry to target. All these options are made possible by Watson Discovery. In this example, let us target the retail industry. I am looking for new businesses that are set to open in the city of San Francisco.
Once I click on "Generate", my query is sent to Watson Discovery News.

Lead Generation With Watson

I selected to return 5 leads, but in my result set below, you will see 4 leads. There is a feature that removes duplicate entries from the result set. You can also view the date the news article was posted and a preview of the content. I will select the "Decathlon" lead and import it.

Lead Results

Once you click on import, the Watson news article is automatically converted into a Lead record in the CRM. Watson will provide as much information from the lead as is available. Below are some basic data fields derived with Watson AI, such as company name, industry, and location information. Watson provides other information that will be helpful in determining whether this lead is worth pursuing, such as the article title and description. You can view more information on the lead source by clicking on the article URL provided by Watson.

Data provided by Watson Discovery News

Outcome Prediction

Yet there is still plenty of other data related to this lead that can be analyzed to help Watson's prediction. For example, you can contact this lead via phone or web survey to better assess their interests and needs. In this example, I have contacted the lead for more feedback. In their response, they provided the job family they belong to, their time frame to make a decision, and whether they have an existing solution. The key data point here is the "Key Objectives Input" field. This gives the customer an opportunity to state what they are looking to achieve. In this pattern, using the Natural Language Classifier service, Watson analyzes the unstructured text and classifies the intent into a structured field. In this case, security is this customer's main goal.

Watson classifies unstructured text using NLC

In this pattern, we have trained a machine learning model using Watson Studio and AutoAI. We deployed the model in the Watson Machine Learning service.
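The role the deployed model plays can be sketched with a toy classifier. To be clear, this is not the AutoAI model from the pattern: the feature names, the historical leads, and the logistic regression choice below are all hypothetical stand-ins for the Watson Machine Learning deployment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical leads: [days_to_decision, has_existing_solution,
# objective_is_security]; label = 1 if the lead closed as a win.
X = np.array([
    [30, 0, 1],
    [90, 1, 0],
    [14, 0, 1],
    [120, 1, 0],
    [45, 0, 0],
    [60, 1, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a fresh lead: the probability of closing as a win, which is the
# number sellers would sort their lead list by.
new_lead = np.array([[21, 0, 1]])
prob_win = model.predict_proba(new_lead)[0, 1]
print(f"probability of win: {prob_win:.2f}")
```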
The goal is to get a prediction of the likelihood of leads converting to won opportunities. Most of the data you have seen thus far is used by Watson to predict how likely this lead is to close as a win. Now I can sort my leads by the highest probability to close as wins, thanks to Watson. As a seller, I would start at the top of the list to review and pursue further as an opportunity.

Watson predicts the leads most likely to close

Post-Opportunity Analysis

Let's fast-forward through the sales cycle to where the lead was converted to an opportunity and closed as a win! There are always lessons learned and areas for improvement. After the opportunity is closed, sellers will obtain post-sales feedback. This can be an in-person conversation, a phone call, an email, or another channel. That feedback contains lots of helpful information for various stakeholders in your organization. Below is an example of some post-sales feedback that was analyzed by Watson. Using the Natural Language Classifier service, Watson converted unstructured data into structured data. That data can be aggregated into reports and rolled up into dashboards. Here are three examples of the feedback obtained.

Post-sales feedback analyzed by Watson Natural Language Classifier

Sales Cycle Recommendation: this guides sales leaders on improvements they can make to their sales process. In this example, the seller could use some coaching on better communicating with clients.

Product Gaps: this helps product leaders understand what areas they need to focus on to improve their product. In this example, the customer provided feedback that the product pricing is higher than competitors'. This could be an indicator that the product team needs to reassess their pricing strategy.

Top Features: product leaders want to be aware of what is going well with their product, to justify its ROI. It is also confirmation that the product is heading in the right direction.
So now, what do we do with all this data for each individual opportunity? How do you make it consumable for sales and product leaders? How does one learn from it? How does Watson continue to get better? All of these closed opportunity records, both wins and losses, are ingested into Watson Discovery for more AI analysis. Watson Discovery will enrich each opportunity record with its out-of-the-box Natural Language Understanding model. It will also identify more patterns within the unstructured data.

Performance Measurement

Watson has done all the heavy lifting of analyzing large amounts of data. Those data points now reside in Watson Discovery and can be queried to create reports and dashboards to be shared with stakeholders. Below is a simple dashboard in Salesforce that shows some key areas for the organization to focus on.

Dashboard insights provided by IBM Watson

As a sales leader looking at the "Sales Feedback" report, I can clearly see that I need to provide some coaching to my team to improve their demeanor with customers. Watson's insights show that this was the biggest factor that led to lost deals. For more detail, I can drill down into my reports to look at individual record data for specifics. As a product manager looking at the "Product Gaps" report, I realize that I need to revisit my user interface component. In contrast, looking at the "Top Features" report, Watson is showing me that my customers are enjoying the reporting component of my product. That's a good confirmation of our ROI for that component.

Continuous Learning

Finally, this pattern implements continuous learning. Once opportunities are closed, lead and opportunity data is automatically fed back into Watson's machine learning model. This ensures Watson continues to learn and get better. There is nothing required of an end user; the retraining is fully automated.
Implementation Just a reminder that though this example was shown using Salesforce as the CRM platform, you could implement this pattern with other CRM platforms as well. Be sure to use our official IBM Watson SDKs to reduce your development effort. To learn more about leveraging the IBM Watson platform, contact your IBM sales representative.
https://medium.com/ibm-watson/generate-sales-leads-predict-winning-opps-drive-product-improvement-all-with-this-watson-pattern-245d0978f448
['Marc Nehme']
2020-07-30 20:45:10.560000+00:00
['Machine Learning', 'Artificial Intelligence', 'Unsupervised Learning', 'CRM', 'Tutorial']
Leveraging Remote Work with Laurel Farrer
Leveraging Remote Work with Laurel Farrer

Episode 42

How do we leverage remote work in our businesses and on our teams? In this episode of Programming Leadership, Marcus talks with Laurel Farrer, CEO and founder of Distribute Consulting, about the challenges facing remote workers and their managers. Despite being around for decades, there are still many managers pushing back against remote work. According to Farrer, this is due to myths surrounding it, as well as managers not utilizing it effectively. She wants people to know that remote work, when properly understood and executed, can create more productive teams, departments, and companies.

Show Notes

Understanding why isolation is such a challenge for remote workers (2:31)
How managers can spot when isolation is affecting one of their remote workers (6:13)
The disconnect between on-site managers and remote workers (10:00)
Advice for managers wanting to add remote workers to a colocated team (14:34)
Helpful mindset shifts for managers averse to remote workers (18:03)
The challenges facing remote teams that do knowledge work (22:00)
Turnover and termination on remote teams (25:09)

Links

Distribute Consulting: distributeconsulting.com
Twitter: https://twitter.com/laurel_farrer
LinkedIn: https://www.linkedin.com/in/laurel-farrer/
Remote Work Association: https://www.remoteworkassociation.com/
This podcast: www.programmingleadership.com
O'Reilly Software Architecture Conference: http://oreilly.com/sacon/blankenship

Transcript

Announcer: Welcome to The Programming Leadership podcast, where we help great coders become skilled leaders and build happy, high-performing software teams.

Marcus: Welcome to this episode. I am so happy to have Laurel Farrer with me today. Laurel is an expert on distributed and remote work, and that's what we're going to talk about. But there's gonna be a twist. Laurel, welcome to the show.

Laurel: Thank you so much, Marcus. I'm so happy to be here.
Marcus: Now Laurel, you own Distribute Consulting, right? Give us a quick overview of what Distribute Consulting does and how it helps companies.

Laurel: Wonderful. So, we are a traditional management consulting firm, just like Gallup and McKinsey and Bain and Accenture. In fact, we work with those brands a lot. So, we are helping businesses do business better, but with the niche and specialty of remote work and virtual distribution, virtual infrastructures, etc. So, yeah, we help enable mobile workforces, and we do that all the way from entrepreneurs up to enterprises, and it's a lot of fun. There are a lot of air-quote "remote work consultants" out there. Traditionally, those are more leadership trainers or team coaches, but we are consultants in the traditional sense of the word.

Marcus: So, I want to start off by asking kind of an unintuitive question, and that is, everybody's so excited about remote work these days. I mean, at least —

Laurel: It's very buzzy. [laughing].

Marcus: Right, it's very buzzy, but is there a dark side to all this remote work? Are there problems that occur, or is there kind of a negative aspect to it that you see?

Laurel: Absolutely. Remote work is young. It's in its infancy. It's not quite as in its infancy as some people think; with all this buzz and media attention, a lot of people think that this is a new and emerging trend. That's certainly not true. Telecommuting, teleworking, it's been around since the 80s. So, we do have a good history of 30, 40 years under our belt of knowing how this works, but 30, 40 years in the world of business and economy and industry is very, very young. So, of course, there are a lot of things that are very new, and we are still troubleshooting a lot within the industry. So, one that you hear a lot about is isolation. We're still working through that and really understanding what the source of that isolation is and looking at all of the different socio-economic factors of that.
So, yeah, isolation is a very, very common complaint that we hear from people when they first go remote.

Marcus: And I was going to ask for the context; you sort of hinted at it there. So this is where a person feels isolated from other people, from the rest of the team, or from corporate. So for the individual, or maybe everybody if it's a fully distributed company, this feeling of being alone is a problem?

Laurel: Yes. So, there's a lot to unpack here. But what's happening is exactly like you said: we're going from a highly social, engaged environment, which is an office space with a lot of people around all the time, to a home office or a co-working space, or even just a coffee shop or a cafe, and we're just independent. We don't have that physical proximity and accessibility to our colleagues like we used to. So, yes, people are feeling isolated. However, one of the multiple facets of this is that, number one, obviously we have a problem in which, as a society, our friendships and connections are coming exclusively from work. That's a sociological problem; that is not a professional problem. So, when you take work away, it really shouldn't have that much of an impact. You should say, "Oh, I don't get to see my work friends as much as I used to, but I still have my family, my community friends, my volunteering position, whatever. I have my social hobbies. It is that one branch of my social life that has changed." However, because we've been spending so much time at work, when that's taken away, all of a sudden you have no social life, and people are really feeling serious mental health symptoms because of this isolation. So, that's a conversation that we like to fuel on the worker and individual professional side: we need to diversify our social streams, and we need to learn and remember how to make friends outside of work.
Marcus: This got really deep —

Laurel: Yeah. [laughing].

Marcus: — because I certainly can relate to having most of the people I hung out with on the weekends or in the evenings, or whose kids' birthday parties I went to — I don't play golf, but you might get the idea. Those were my work colleagues. And that's kind of the way I thought it was supposed to be. But you've described this as a problem now.

Laurel: [laughing]. Well, yeah, I mean, just as a society, in industry and commerce in general, we've really fueled these traditions of having a very fun and engaging place to work. Like, this is culture, right? These are perks and benefits: you want to work here because we've got the showers on site, and we provide a free breakfast, and we provide happy hours after work. And so, we've learned how to have a work-life balance within work. And all of a sudden, what we're not realizing is that instead of spending 6 to 8 hours a day at work, we're spending 8 to 10 to 12 hours a day at work, and it is our entire social life, and then our entire personal ecosystem is living within work. So yeah, that's why remote work is such a big proponent of work-life balance: we're trying to keep work, work, and life, life, and not integrate them, but balance them.

Marcus: So, if somebody is listening and they have remote workers, how can they become aware of when isolation is becoming a problem?

Laurel: Yes. Okay. So, there are so many different channels to this conversation. Another channel to discuss, which will answer your question, is that isolation is not just social; it's also informational. So, as workers, we don't necessarily miss sitting next to somebody; we miss celebrating with somebody, or we miss being able to ask somebody a question at a moment's notice. We miss that accessibility and that dependability that our network provides. So, to more specifically answer your question: to identify isolation, people are either going to be silent.
This is a whole new level of communication. [laughing]. We're getting really deep here, Marcus. So, this is another topic that we can talk about in a minute: the difference between communication in a virtual environment and in a physical environment. But we don't have the opportunity to see somebody sitting at their cubicle looking sad and confused. So, we have to watch for other factors. That can be them ghosting or just being really silent all day long on our Slack channels; they're lost, and so they're lost visibly in our virtual world as well. It can be that they're asking too many questions, like they're obviously missing some things; they are not capturing the vision of the project that you're working on. So, if you're getting frustrated with how many questions they're asking, take a step back and say, "Alright, let's talk about this. You are obviously isolated from something: the vision or the workflow or something." They can also just be looking in the wrong places. And so, as managers, it's our responsibility to over-communicate and to keep communication channels very, very open, because it's by asking them questions, or them asking us questions, that we really identify and keep track of somebody's status, whether they are informationally isolated or not. So, communication is key, and we watch those new social factors and new nonverbal cues in a virtual environment to see how they're doing.

Marcus: Yeah, it seems like the challenge is the nonverbal. Like, on one hand, saying nothing might mean I'm really productive and deep in thought, or saying nothing might mean I am so unhappy in my job and feeling really alone, and frustrated, and lost. And it just dawns on me that in those moments when we are feeling that way, that's probably the least likely time we're going to put a sad emoji in Slack.

Laurel: Exactly.
And it's our responsibility as managers to create the channels and the culture in which people are comfortable asking questions and comfortable being transparent and vulnerable about how they're feeling. So, if they are lost and confused and frustrated and isolated, and they feel like they're going to get reamed by their manager for being unproductive or being behind schedule, then it's only going to fuel the problem. However, if we can create a culture of transparency and vulnerability, so that they can say, "Hey, look, I'm really struggling, I need some help. I'm missing something," and then explain the entire problem in a safe place, then that's when you as a manager can say, "Oh, it just sounds like you are missing a link to this. Here it is." Five minutes later, the problem is done.

Marcus: Okay, I have a question.

Laurel: [laughing].

Marcus: I have the impression a lot of managers and a lot of workers think that remote work sounds pretty great and pretty easy. Do you think we acknowledge how hard it is to be a remote worker?

Laurel: So, it's kind of a complicated answer, because remote work is very complex and very different, but it's also very simple and very much the same. Those differences are very small, but they're very impactful, and that's why and how it can be both at the same time. So, if we come into remote work understanding what those differences are, how we need to update, like we were just talking about, our nonverbal communication, our awareness, how to prevent isolation, and how to prevent burnout, if we can understand what those red flags are and how to prevent them with a long-term vision, then it's a very, very easy change.
However, when you don't understand what those changes are, and you just wing it, and you just kind of go home with your laptop and keep your fingers crossed and think that everything's going to be the same as it was in an office, that's when you're going to have problems with sustainability. And that's why we see the classic cases of large companies having to retract their policies: because they're trying to manage virtual operations in the same way that they were managing physical operations, and those are two different things.

Marcus: Do you think we should be giving people remote-worker training on how to become a productive, happy remote worker?

Laurel: Absolutely. In fact, I consult universities on this topic: how can we incorporate virtual collaboration dynamics directly into classroom experiences? Because this is the future of work. This is where people are collaborating. And honestly, it doesn't matter if we're a 100 percent distributed company or if we are a hybrid company; this is just how work is being done now. Our clients are halfway across the world. We're operating in a global economy, so it doesn't matter if your coworker is six cubicles away or six countries away; you're still emailing them, you're still pinging them in the project management software. You're still collaborating as a virtual team, regardless of proximity. So, yes, it is absolutely essential that our incoming workforce get trained on virtual collaboration, on digital communication, on virtual professionalism. These are all new topics for the future of work, and yes, they absolutely need to be trained. We need to be more bold and comfortable with the fact that this future-of-work conversation we're having so much is not forthcoming; it's not the future of work; it is the present of work, and we need to be much, much more aggressive in preparing our workforce accordingly.

Marcus: Hmm. Now, we typically talk about software teams, or people, or companies on this show.
But I'm curious, from your perspective, what industries are really starting to push forward in remote work in a really active way?

Laurel: Yeah, so tech, obviously, like you said. This is where remote work was incubated. Because of the workflows and processes, it was just very compatible, and people were already recognizing that they were working independently, so it was a natural segue. And we were able to incubate it, test all of the processes, and really watch, in pilot, whether it was possible to operate as a fully distributed team. Now we know that it is possible, and so we're able to extend it to other industries. So, healthcare is really coming up fast, accounting is really big, the entire financial world. And we also see arts and entertainment; think of graphic designers and video editors. All of our tools and products and services are becoming more and more dependent on computers anyway, and that's essentially the only criterion for whether a job is remote compatible: does it use a computer as its primary tool? If yes, then it's remote compatible. So, the more that computers are used in all industries, even manufacturing and engineering, and the more that we see computer use and artificial intelligence and automation, the more roles and systems become remote compatible.

Announcer: Tap into the insights and lessons of leading experts from companies like Target, the New York Times, and Comcast; connect with like-minded peers; and go in-depth with hands-on training courses in hot topics like cloud computing, microservices, and leadership. At the O'Reilly Software Architecture Conference in Santa Clara, happening June 15–18, 2020, you'll get the knowledge and build all the skills you need to go to the next level in your career and transform your organization for the better, no matter whether you already hold the title of architect or just aspire to.
This year, we’ve introduced brand new learning paths, which will give you a distinct and chronological path to gain a solid understanding of the trends that interest you, including serverless, data and security, and domain-driven and event-driven concepts. Plus, you’ll also find unparalleled networking opportunities and fun events like Architectural Katas, speed networking, and much more. If you want to successfully update your legacy systems, hear more about industry-specific strategies, or just dive into the most important emerging trends in the software architecture space, this is the event for you. Reserve your spot today and enjoy risk-free cancellation before May 15 by visiting oreilly.com/sacon/blankenship. Marcus: Hmm. Okay, let’s turn to the manager’s perspective. I know some managers who have on-site teams and because of hiring, they’ve told me, “I would love to hire globally, because it’s so hard to hire in my town.” But they’re concerned, they’re resistant to this idea: what will it be like to have a remote employee, and what if that person is just watching Netflix all day? I would have no idea what they were doing. What advice do you have for a manager who’s contemplating adding remote people to an already co-located team? Laurel: Yes. So, this is actually the level of our entire society in which the adoption of remote work is being blocked. So, the workers are — Marcus: Uh-oh. Laurel: Yeah, I know. [laughing]. It just got real, real quick. Marcus: I know. What’s gonna happen here? You heard it first on this show. Laurel: So, the workers are hungry for it. They obviously recognize how much this would impact their personal lives, their commute times, their personal savings account, it would really benefit workers immensely. So, the workers are really putting forth a lot of pressure for scaled adoption. Now we’re entering the phase where executives are much more open to the idea as well.
They’re saying, “Well, I’ve been traveling all of this time. I guess I’m a remote worker too.” And so, they are more familiar with the processes, and they are used to being the innovative thought leaders, and so they’re anxious to do something that will make them look good. So, the executives are really coming around as well. It’s the mid-level managers that are really digging their heels in, and understandably so. I mean, you and I have both been in this position before, that the brunt and the weight of responsibility when it comes to change management really falls on this level. So, when a company adopts a standardized remote work policy, and there’s a lot of change management associated with the tools and processes and methods, that falls on the shoulders of these mid-level managers, and so they’re saying, “No, that’s one more thing that I’m responsible for.” So, what is important to note about the change to remote work is that it’s still work. And like I said before, not much changes. The dynamics of just how we operate businesses right now, in this day and age is very, very virtually compatible. And so, you’re still going to be communicating with your team a lot. It’s that our dependency as managers on physical and sensory criteria that we’ve just grown used to, it’s ingrained into our entire souls as managers, right? Like, that’s how we got to be managers, is we arrived to the office early, and we left late, we were burning the midnight oil. That’s how we became managers. And so it’s hard for us to change that mindset of “No, I have to show that I’m doing a good job. I have to make phone calls look busy, giving the powerful handshake, wearing the power tie.” All of these traditions that we were taught in college were all based on physical sensory experiences. So, there’s a lot of mindset shift that has to happen. [laughing]. Marcus: There are. Yeah, I’m smiling here, because yeah, I’m envisioning a variety of things. But continue. 
Yeah, what mindset shifts would be helpful for these managers who are finding it really overwhelming? Laurel: Yeah. So, the big shift, and this is a very oversimplified explanation. So, it’s important to know as I go into this explanation that remote work is a tool to be leveraged. It’s completely customizable. That’s a big mistake most managers make and assume, is that there’s no gray area. It’s black or white, you’re fully distributed and you’re the next Automattic or InVision, or you can only grant one or two employees flexibility based on maternity leave or illness or something like that. There’s not really anywhere in between. That’s completely incorrect. It can and should be leveraged in order to capitalize and fuel the objectives of the business, and of the department, and of the team. So, managers are much, much, much more empowered in this process than they think they are. So they can say, “My team, the dynamics of my team are, we get some great collaborative think tank sessions done on Mondays, and then we work on them for the rest of the week, and then we cycle back again on Monday. We regroup, do more think tank, and then we all separate and work on them independently.” In that circumstance, by all means, have everybody on site and get that great in-person dynamic on Mondays, and then send everybody to work from a location that empowers them and fuels their creativity for the rest of the week. That also might be in the office, it might be at home, it might be a co-working space. Remote work is just about fueling the conversation that the location is irrelevant to the work. And so, we want people to leverage the location, and work from a place that fuels their personality or their workflows and styles. So, it doesn’t mean that you’re sending away from the office, it just means that they don’t have to be in the office. So, that being said, to circle back to your original question, how do they feel more comfortable in making this transition? 
It’s about busting those myths and changing the mindset around that physical sensory experience. We don’t need to hear phones ringing, see people working in the office to know that they’re productive. We as managers, we also came from the generation of solitaire, and Minecraft, and Candy Crush. We all know that being in an office does not mean that you were being productive in an office. So, we need to shift our tracking and reporting methods to be based more on accomplishment instead of activity. And as soon as we do that, and we update our workflows, update our reporting structures to be more results based, all of a sudden that gives the manager much, much, much more opportunity to take their hands off of the production cycle, and just focus more on the results of the production cycle. The very simplified and classic example that I give is that I asked somebody to wash my car. I don’t need to know where you washed the car, I don’t need to know when you washed the car, I don’t need to know if it was done in the parking lot, or what kind of soap you used. I don’t care. I just want to know that my car is clean by five o’clock when you tell me to pick it up. And at that time, I will very much be able to see if you washed my car, and if you did a good job, and if I can depend on you to do it again. So, it’s really shifting our support as managers. It’s providing support during production, and then really being involved and being focused on the delivery phase. Marcus: Yeah, it’s interesting with — I love the car analogy. With the car, it’s pretty easy to tell when you pick it up. Is it clean or is it dirty? Is the inside as clean as I expected it to be? So you can observe the results of that. But I feel like, with some kinds of knowledge work, especially knowledge work that has handoffs, that has a longer process than just one day, it can be a challenge to observe work in process and to understand what’s getting done. 
And I’m thinking of everything from software, to accounting, to all kinds of — I’m sure HR has projects that take many months. And so on any given period of time, how can we — like, it seems like we have to have new conversations about what’s expected, and how we check in with one another, and how we maybe create metrics for the transparency of work. Laurel: Exactly. It’s all about that, and that’s why communication came up earlier in our conversation, and why communication will always come up around remote work. It really boils down to trust, culture, and communication. All of my conversations — ever — always boil down to trust, culture, and communication. And this is exactly why that we need to trust during the process, but also be communicating during the process. When we’re results focused, it does not mean that we take our hands off and never talk to our team until they deliver. It means that we need to be accessible and available and supporting them in different ways. As opposed to us directly managing and controlling the results during the production phase, we are supporting them as they manage the results, because that’s self management, that’s remote work, is you don’t have a manager sitting next to you, watching you, supervising you. So, on the worker side there’s so much more responsibility and so much more autonomy, both good and bad, that is required when you work off-site. Somebody has to manage the work, and if you don’t have a manager sitting next to you managing the work, then that means you have to. So, that’s another big misconception of remote workers is they think, “Woohoo, I’m free.” And it’s like, No, no, no, no, no, there’s so much more responsibility when you’re working off-site. You have to be responsible for maintaining your energy through the day, prioritizing your tasks, understanding what’s next, preventing burnout. All of that is now up to you, you don’t have the infrastructure of an office to manage that for you. 
So, yes, the new dynamic of managers, and this is why people operations is really coming into the limelight — or into the spotlight — in the HR world is because our new role as managers is that the people are managing the results instead of the managers managing the results. But now the managers are indirectly fueling results by supporting the people. So, people and operations are much, much more unified and cohesive than they have been traditionally. Marcus: Hmm. Okay, well, I want to ask — I’ve got a burning question here. I want to talk about the T-word: turnover. Is turnover on remote teams higher? After all, it seems like once you get all your stuff in an office, a picture of your wife, or your kids, or your husband, or your family and you get your potted plant, you think about leaving, you’re like, “I gotta haul all that stuff back out.” But if you’re working from home, do people turn over faster when they’re remote workers? Laurel: Oh, good question. This is a really great statistic, is that retention actually increases by 70 percent. Marcus: What? Laurel: Yeah. In remote teams, remote work is often seen as an operations strategy. Absolutely not. It’s a talent acquisition strategy. So, your recruiting costs are drastically lowered and your retention is, I mean, through the roof. So, yeah, people enjoy the flexibility. All of the reasons that they usually have to leave a job; my spouse got a new job in a new city, or the commute time is too long, or I’m just feeling burned out, or I need to be able to pick up my kids when they come home from school. Those daily life decisions that often fuel a reason to leave a job, those are resolved with flexibility. And so yeah, they are able to stay in their job, regardless of where they live, or what their personal schedule looks like. 
And in so doing, they are also so grateful for that flexibility, they feel so much more valued as an employee, the employee experience just soars, and as a result, they feel much much more loyal to the brand, and so they stay with the company much longer. Marcus: Okay, I’m curious if you have statistics on the other T-word: termination. And that is, do people get fired at a different rate when they’re a remote employee, then if they’re not a co-located person? Laurel: I don’t have a solid number on it, but the last number that I heard is that that also decreases by 30 to 40 percent, because — it’s the same reason, right? We are not… we’re putting more empowerment and more responsibility into the hands of the employee, and so they — how do I say this in a politically correct way — managers often are terminating employees because they are not fulfilling expectations. However, a really good sound compliant remote work policy articulates expectations much, much more than previous employment, right? We have to say that, we have to articulate expectations of you, you will be online during this time and this time, and you will be accessible in these channels. We have to give that checklist just for compliance reasons. And so, when workers have that very clear checklist of exactly what is expected of them, then there are very few opportunities for them to fall through on that. And they also are empowered more to control their outcomes because of this results-based tracking. So, if they don’t like something about the process, they feel very micromanaged, there’s much less opportunity for that to happen now. There’s also less opportunity for discriminatory termination as well because everybody’s results are equal. And everybody is equally measured. There are very few opportunities for discrimination to be happening at all. Marcus: This has been an amazing interview. Thank you so much. Where can people find you online and engage with your work? 
Laurel: Yeah, DistributeConsulting.com is my management consulting firm. I’ve got a team of the world’s best thought leaders and experts that were all collected there, and anxious to help managers as well as businesses leverage and capitalize on remote work. But I’m always happy to talk about remote work in any capacity, so you can also find me on social media @LaurelFarrer. I’m usually the only one so it’s easy to find me. LinkedIn and Twitter are our strongest channels. And then you can also find me at the Remote Work Association which is a nonprofit organization that I’m the founder of. Marcus: Wonderful Laurel. Thank you so much for talking with us today. Laurel: Thank you so much, Marcus, it has been so fun. Announcer: Thank you for listening to Programming Leadership. You can keep up with the latest on the podcast at www.programmingleadership.com and on iTunes, Spotify, Google Play, or wherever fine podcasts are distributed. Thanks again for listening, and we’ll see you next time. The post Leveraging Remote Work with Laurel Farrer appeared first on Marcus Blankenship.
https://medium.com/programming-leadership/leveraging-remote-work-with-laurel-farrer-2ffb774b9305
['Marcus Blankenship']
2020-04-16 07:14:48.327000+00:00
['Leadership', 'Management', 'Software Development', 'Startup', 'Technology']
SEO for Next.js: Generating a Dynamic Sitemap
Step 1: Setup and project structure

If you don’t already have a Next.js project set up, you can set up a starter by following the instructions in the official Next.js tutorial. Most Next.js websites have static and dynamically generated pages living inside the pages directory. To read these page names we will install globby — an NPM package that can traverse the file system and return pathnames.

yarn add --dev globby

That’s the only external dependency we will need for our sitemap generation script. There are two parts to creating a sitemap. Let’s have a look at a standard Next.js project structure.

|- pages/
|  |- about.js              # about page
|  |- blog/                 # blog folder
|     |- index.js           # default blog
|     |- [id].js            # dynamically generated blog posts
|- components/
|  |- MainComponent.jsx     # Main component
|  |- OtherComponents.jsx   # Other component

Firstly, our script needs to traverse the file system and find the names of the static files (in this case, about.js and the two index.js files). Secondly, we need to add all our dynamic pages ([id].js) to our sitemap. Traversing the file system will not work for these, as we need to know all the URLs of the dynamic pages up front. We will tackle this in Step 4. First, we generate a sitemap for all our static pages.

Step 2: Writing a script that generates a sitemap

Once we have our basic setup, we can write a small script to generate a sitemap. I have written mine in utils/sitemap.js, but feel free to pick a folder of your choice! The following script converts all our static files to pathnames and then dumps them into an XML file, accessible in our public directory at public/sitemap.xml. I’ve added some comments to the gist which further explain what is going on. In short, we use globby to get the pathnames from our directories (ignoring our API routes and other pages that we don’t want to include in our sitemap). Then, we map through these and generate an entry in the form of <url><loc>…</loc></url> for each. Simple!
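The gist referenced above is not reproduced here, so the following is a minimal sketch of the static-page step only. The site URL, helper names, and hardcoded paths are illustrative assumptions; in the real script, globby supplies the list of paths.

```javascript
// Sketch of the sitemap-building step, with hardcoded paths standing in
// for the list that globby would return. SITE_URL and the helper names
// are assumptions for illustration, not taken from the original gist.
const SITE_URL = 'https://example.com';

// Convert a pages/ file path into a route, e.g. 'pages/about.js' -> '/about'
function toRoute(filePath) {
  return filePath
    .replace('pages', '')
    .replace(/\.jsx?$/, '')
    .replace('/index', '') || '/';
}

function buildSitemap(pagePaths) {
  const entries = pagePaths
    .map((p) => `  <url><loc>${SITE_URL}${toRoute(p)}</loc></url>`)
    .join('\n');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    entries +
    '\n</urlset>'
  );
}

// Paths mirroring the project structure above
const sitemap = buildSitemap(['pages/about.js', 'pages/blog/index.js']);
console.log(sitemap);
```

In the full script, the resulting string would then be written to disk with something like fs.writeFileSync('public/sitemap.xml', sitemap).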
Step 3: Update next.config.js to run the script at build time

Now that we have written our initial script, we need to update the webpack configuration in our next.config.js file to generate the sitemap at build time. Make sure to update the file path according to your project structure!

module.exports = {
  webpack: (config, { isServer }) => {
    if (isServer) {
      require('./src/utils/sitemap.js')
    }
    return config
  }
};

Now when you run yarn dev or npm run dev, you should get a sitemap.xml file generated in your public/ folder.

Step 4: Handle dynamic pages (or pages that are generated via an external API)

One of the key features of Next.js is the ability to generate dynamic pages. We have not accounted for these dynamic pages in our original script, but it is quite straightforward to add them. The implementation will differ based on the source of the slugs used to generate the URLs of your dynamic pages. If your pages are fetched from an external API or CMS (for example, Contentful), then you will have to fetch the [id] params of these pages before you can generate a sitemap for them. You might store these in a database, source them directly from a CMS, or keep a list of page names in a JSON file — but the idea remains the same: query this list and map the names to form a <loc> entry in the sitemap file. Easy! For this example, we will be using an external API. There’s an excellent API for mock data available at https://jsonplaceholder.typicode.com/. We will be querying the fake posts and using the ids from these posts to generate our pages. The final code should look something like this, with explanation following. In short, we fetch a list of sample posts from our external source and map them to generate their respective URL paths. Now, if you run npm run dev or yarn dev you should get a new file public/sitemap.xml which will hold the contents of your sitemap.
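The final gist is likewise missing from this excerpt, so here is a sketch of just the dynamic-page step. A hardcoded sample stands in for the live fetch from jsonplaceholder, and the /blog/:id route shape and SITE_URL are assumptions for illustration.

```javascript
// Sketch of the dynamic-page step. In the real script, this list would come
// from fetching https://jsonplaceholder.typicode.com/posts; sample data is
// used here so the example is self-contained. SITE_URL and the /blog/:id
// route shape are assumptions, not taken from the original gist.
const SITE_URL = 'https://example.com';

// A stand-in for the posts returned by the external API (only ids matter)
const posts = [{ id: 1 }, { id: 2 }, { id: 3 }];

// Map each post id to a sitemap entry for its dynamically generated page
const dynamicEntries = posts.map(
  ({ id }) => `  <url><loc>${SITE_URL}/blog/${id}</loc></url>`
);

console.log(dynamicEntries.join('\n'));
```

These entries would then be concatenated with the static entries inside the same <urlset> before the file is written out.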
Once you have deployed your website, your sitemap will become accessible at https://YOUR_URL/sitemap.xml, and you can use this unique link to submit your sitemap to the various search engines (for example, in Google’s Search Console). And that is all, folks! We’ve managed to create a sitemap for both our statically written and dynamically generated pages! Now, whenever you add new pages, whether statically written or dynamically generated, and a new build is triggered, your sitemap will be auto-updated and search engines will be able to crawl them effectively. Happy Coding!
https://medium.com/javascript-in-plain-english/search-engine-optimisation-seo-for-next-js-generating-a-dynamic-sitemap-6a7698ea8c1a
['Niharika Khanna']
2020-10-22 17:06:25.320000+00:00
['JavaScript', 'Front End Development', 'React', 'Nextjs', 'SEO']
Machine Learning Engineers Will Not Exist In 10 Years.
OPINION Machine Learning Engineers Will Not Exist In 10 Years. The landscape is evolving quickly. When you can’t find a good title image for the life of you, just add a cat. Photo Creds: Unsplash Note: this is an opinion piece; feel free to share your own opinion so we can continue to move our field in the right direction. Machine Learning will transition to a commonplace part of every Software Engineer’s toolkit. In every field we get specialized roles in the early days, replaced by the commonplace role over time. It seems like this is another case of just that. Let’s unpack. Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask. The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They’re typically strong programmers who also have some fundamental knowledge of the models they work with. But this sounds a lot like a normal software engineer. Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes with decades of tenure, who don’t have the time (or will) to understand the space. Here are a few requirements from job listings from some of the top tech companies; notice how vastly they differ: This first one is spicy. Are you sure this isn’t a researcher? How is this a Machine Learning Engineer? PhD in Math, Stats, Operations Research.
Knowledge of R, SQL, and modern Machine Learning techniques. This next one’s more on-brand. And it comes from the top so it shouldn’t be a surprise. BS or MS in Computer Science. 1–5 years work or academic experience in software development. Exposure to Computer Vision, NLP, etc a plus. And finally drilling down on your stereotypical ML Engineer posting. BS/MS in Computer Science. 3 or more years building production Machine Learning systems and efficient code. Experience with Big Data a plus. Some companies have started a new approach and I think most will follow. The approach is to list a Software Engineering role with exposure to Machine Learning as a core requirement + a few years of experience as a preferred qualification. Employers will take a preference to engineers with experience building and scaling systems, regardless of whether it was based on Machine Learning or some other technology. The Machine Learning Engineer is necessary as long as Machine Learning understanding is rare and has a high barrier to entry. It’s my earnest belief that the role of Machine Learning Engineer will be taken over entirely by the common software engineer. It will transition to a standard engineering role where the engineer will get a spec or reference implementation from someone upstream, turn it into production code, and ship and scale applications. For now, much of many Machine Learning roles exist in this weird space where we’re attacking problems with ML that just haven’t been attacked before. By consequence, ML Engineers are in many cases half researcher, half engineer. I’ve come across my fair share of Machine Learning Engineers who play across the entire stack. I’ve come across others who have a more narrow skillset but spend more time reading new research papers and turning them into usable code. We’re at a weird crossroads where we’re defining where the members of our teams fit into the puzzle. 
By consequence of the way we work, we tend to shove ourselves into discussions and sit in meetings regardless of whether it’s core to our expertise. We accept any and every meeting invite… It’s my opinion the Machine Learning Engineer belongs at the tail end of building a reference implementation and then owns turning any of that into production code. Not long from now, most enterprises will have little need for research efforts to get their projects to the finish line. Only niche use-cases and deep technical efforts will require a special skillset. Engineers will consume APIs and the world will move on, with Machine Learning becoming a commonplace tool in every new engineer’s toolkit. We’re already seeing this as more and more exposure to Machine Learning trickles into universities. Go to a Machine Learning course at a university and it’s packed to the brim. Almost every graduate will leave university with some exposure to the field. We can draw an analogy to Blockchain, where the Distributed Systems Engineer became hot. The vast majority of Blockchain projects since Nakamoto’s white paper have been spending their efforts on building the fundamental technology and infrastructure. To do so you had to have incredibly strong engineering skills, most often described as a Distributed Systems Engineer. You’re finally seeing a shift where things are getting abstracted, enterprises are starting to find use-cases, and the everyday engineer can now build novel use-cases using blockchain. We’re seeing the same general shift in AI/ML. Some Valid Counter Points It’s possible that the Silicon Valley theme of “One API to rule them all” is bogus and Machine Learning will always require some degree of customization at the infrastructure level. It’s my opinion that what HuggingFace is to NLP will happen to every other domain. We’ll be able to conquer the majority of use-cases with a simple API. “It’s just a title, dude.
Machine Learning Engineer just means someone with a heavier background in Math and Stats than your average CS graduate.” Totally agree. It’s just a title. But if that role is no longer necessary will the title exist? But you’re right, it’s just a title. “In my organization that’s not what Machine Learning Engineer means at all.” Let me know what it means to your organization so I can learn. I’m constantly surveying the field to understand where things are at and where they’re headed. I would love to hear your outlook. “It’s just a title. Who cares?” You’re right, but it’s fun to consider anyways. “Machine Learning is a nascent field with new use-cases and research being constantly realized; to think this will slow down in the next decade is naive.” Very possible! One of my favorite responses to the article, from Varii on Twitter: “Like you said, it’s a title. Most employers expect you to have overlapping skillsets. I feel like in the end it’s not about who gets wiped out, it’s about who is versatile enough to constantly adapt to the ever changing industry.” Tons of great input from the broader community that I’m learning from. But my opinion will never shift on one thing: if you’re passionate about something it doesn’t matter what happens to a title, a field, or a trend, there will always be a place for you to pursue your passion and build cool things. Stay safe and build on! I started a (free) analytics group called Dataset Daily where we share a dataset every Monday and code throughout the week. Let’s continue the conversation on Twitter.
https://towardsdatascience.com/machine-learning-engineers-will-not-exist-in-10-years-c9cbbf4472f3
['Luke Posey']
2020-05-03 20:57:26.142000+00:00
['Machine Learning', 'Artificial Intelligence', 'Software Development', 'Data Science', 'Programming']
Boring Ways to Get Creative
Start Doing and Stop Hoping For Inspiration In his book Outliers, Malcolm Gladwell credited people’s successes to hard work. In 1960, The Beatles were invited to play in Hamburg, Germany, a place which didn’t have rock-and-roll clubs back then. Before they rose to prominence, audiences didn’t care much about what they were listening to. What made this experience exceptional was the length of time The Beatles played — sets were eight hours long, and they played every night of the week. By the time the world knew who The Beatles were, they had played approximately 1,200 live performances. Within eight years, every album release was a hit. The notion that doses of inspiration come while you wait around is a myth. Inspiration comes when you do the work. It comes during your work, not before or after. That said, creative geniuses find the time to be creative. They work harder and longer than anyone, hoping that among the hundreds and thousands of pieces produced, their efforts will pay off. Indeed, there’s almost a correlation between the originality of one’s work and the time spent churning it out. Mozart composed 600 pieces of music, with 10 of them becoming his greatest masterpieces. While it’s easy to beat ourselves down for not being able to harness the power of creativity at times, we need to remember that creative minds like The Beatles and Mozart slogged away for a seemingly infinite number of hours only to have a few smashing albums or art pieces take the world stage. “Inspiration exists, but you have to find it working.” — Pablo Picasso Be Bored Out Of Your Mind Take a blank piece of paper. Enclose yourself in a room with no TV or any digital device. Sit on a chair for 15 minutes and stare out of your room’s window. Do nothing at all, and you’ll see that this is the best and most boring way to be creative. Agatha Christie, a renowned detective novelist, once said in a BBC interview, “There’s nothing like boredom to make you write.
By the time I was 16 or 17, I’d written several short stories and one long, dreary novel.” Why is that so? When we’re bored, two things are happening in our minds. The first is that we have the desire to do something, but we don’t want to do anything that’s on offer. Second, boredom leaves our mental capacity fallow. It’s not boredom itself that births creativity. It’s the process of being bored that leads to creativity. Because feeling bored is as uncomfortable as it is aversive, we’re inclined to look for something else — giving us a chance at discovering something new. Well, this all seems easy, but to benefit from this entire process of being bored, we have to put away digital distractions too. Mindlessly scrolling through your Instagram feed would chip away at the minutes or hours you could use to exercise your creative ability. Borrow Ideas From Creative People “There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations.” — Mark Twain Perhaps the greatest myth to bust is that creativity is pure reinvention. Apple created the iPhone not because it was a new idea plucked from thin air. It was an improved and futuristic version of the flip phones we used to have. Superhero films are a variety of characters with differing abilities, with good superheroes battling against evil villains to save the city from mayhem. Writers read other writers’ books and try to emulate their writing styles. The creative process is a remix of the old to come up with a new and novel idea. In other words, the principle of creativity is merely making older things better. Look to people whose works you deeply admire, try to imitate them, then package the results in your own way. However, understanding your voice, styles, and preferences is the first crucial step before differentiating yourself from the rest. Thank you for reading!
https://medium.com/curious/nurturing-creativity-should-be-a-boring-process-5f1b6bcde836
['Charlene Annabel']
2020-12-19 06:18:46.113000+00:00
['Creativity', 'Life Lessons', 'Self Improvement']
Money vs. Moral Outrage — That’s the Real Dilemma.
The reality is quite simple. Your behavior is your personal currency — that is, unless you’re willing to pay. Our own mental models of experience and perception of value exchange are complex. The Social Dilemma served as an example of the “voice” of the concerned as it relates to privacy and manipulation of perspectives, including the reinforcement of biases. For those of us with decades of experience in the industry, we’ve known, and in many instances used, the algorithms and data for the benefit of our clients, while recognizing the issues with data privacy. Here’s the reality. Nothing is free. We’re the product. Our stories, the narrative of our lives, are effectively the monetization engine for platforms such as @Google, @Facebook, @Instagram, @Snap, @TikTok, @Pinterest, etc. It’s this “free” service that requires our content in order to align our behaviors, our desires and our own “idiosyncrasies” to the advertising model that underpins the revenue model of these companies. The chorus of concerns over privacy and manipulation by “Big Tech” is, in essence, a hollow argument when we take into consideration the perspectives of the users of the platforms. The reality is straightforward, yet the clarion call to action and the actual willingness to change behavior stand in stark contrast. In ’19 Facebook’s avg. gross revenue per profile per month was $2.39 USD. Think about that for a moment. This is the advertising revenue generated per profile in return for collecting personal data and reselling it to companies in order to target specific behaviors or predicted needs based on look-alike modeling. For context, my drink of preference, a @Starbucks Venti Vanilla Chai Tea Latte, is $5.67. The argument of “moral outrage” as it relates to our concerns over privacy rights rings hollow when survey after survey demonstrates an overwhelming bias toward not paying for services such as Facebook, Google, etc.
On average, fewer than 25% of individuals say they would be willing to pay for Facebook or other platforms that use behavioral targeting; put another way, roughly 75% or more are unwilling to pay for these services. At $2.39 per month, the total annual cost would be under $30, the equivalent of about 5 Starbucks Chai Tea Lattes per year. Context matters. For those concerned about privacy and platform bias, the reality is that you're either willing to offset the cost of access with your personal funds, or you're willing to use your personal data as the equivalent of currency. Share your thoughts and perspectives. You can reach me on Twitter — @digitalquotient or on LinkedIn — https://www.linkedin.com/in/bobmorris/
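As a quick sanity check of the numbers above, here is a hedged sketch using the $2.39 monthly figure and the $5.67 latte price quoted in this post (the variable names are my own, not from the original):

```python
# Sanity check of the figures quoted above (all values taken from the post).
monthly_arpu = 2.39   # Facebook's 2019 avg. gross revenue per profile per month
latte_price = 5.67    # a Starbucks Venti Vanilla Chai Tea Latte

annual_cost = monthly_arpu * 12            # what "paying instead" would cost per year
lattes_per_year = annual_cost / latte_price

print(f"${annual_cost:.2f} per year, about {lattes_per_year:.1f} lattes")
# → $28.68 per year, about 5.1 lattes
```

The math bears out the post's claim: a year of the "paid" alternative costs less than $30, roughly five lattes.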
https://medium.com/swlh/money-vs-moral-outrage-thats-the-real-dilemma-e926ed5e2324
['Bob Morris']
2020-11-24 14:53:10.176000+00:00
['Facebook', 'Privacy', 'Social Media', 'Data Science', 'Social Media Marketing']
13 Tips to Losing Belly Fat in Record Timing
Photo Credit: AbsExperiment Most people get a gym membership after they've made a firm decision to lose the excess weight, yet most of them give up far too easily and too early in the process. The main reasons are a combination of a lack of motivation, knowledge and the right guidance, which keeps them from overcoming their biggest obstacles and getting the physique they've always dreamed about. Carrying extra fat and keeping unhealthy habits can lead to dangerous health conditions like high blood pressure, type II diabetes, obesity and many others. If you want to avoid all of this, consider this list of 13 quick fixes that will help you lose belly fat as fast as possible! 1. Eat Protein During the Entire Day The most important thing to do is to increase your protein intake. Protein is the most important nutrient for building muscle, and it can help you tremendously in burning off excess fat. That's because it has a very high thermogenic effect: the body spends a lot of calories digesting it, since it is much harder to absorb than carbs and fat. By expending more calories and energy on digestion, the body aids weight and fat loss. 2. Start the Day with Fat This might sound illogical, but if you plan on burning fat during the entire day, you need to start it off by filling your breakfast with fat-rich foods. Eating them first thing in the morning effectively tells your body to use fat deposits as the main energy source for the rest of the day. 3. Train for Hypertrophy Hypertrophy-specific exercises are an excellent tool to keep the heart rate elevated during the training session. In this type of workout, you do sets in the 8–12 rep range, resting a maximum of one minute between sets. This will rev up the metabolism and speed up fat burning. And you should do this consistently. 4.
Avoid Processed and Junk Food The food you consume is the greatest determinant of the body you will build. To build an impressive physique, you need to eat the proper food, and processed food is the worst type of food to eat. Or as it is usually repeated in the fitness world: "If you saw it on TV, don't eat it". 5. Incorporate Cardio into Your Training Program Cardio should become a part of your training regimen if you ever plan on losing the excess weight; there's no way you will see a drastic body transformation without it. If losing fat is your primary goal, do 15-minute sessions on the treadmill, bike or elliptical pre- or post-workout. 6. Eliminate Alcohol The main reason you have that inflated belly is that you drink too much alcohol, especially beer. This may be bad news if you love drinking beer, but unless you stop, your stomach will keep getting bigger; that's why it's called a 'beer belly'. When you are on a cutting regimen, you should eliminate alcoholic drinks altogether. 7. Eliminate Sodas and Soft Drinks A lot of people love to drink soda with their meals. These are the same people you will hear complaining that, no matter how hard they try, they never seem to lose weight. If you happen to be one of them, now you have one answer to why you cannot lose weight. 8. Eat Healthy Fats Saying that all fats are bad for your health is plain wrong. Even though some of them, such as trans fats, are indeed bad, other kinds like unsaturated fats play a vital role in many essential body processes. Ensure that you eat enough unsaturated fats from foods like sesame and olive oil, peanut butter, etc. 9. Add More Fiber to Your Diet It's much more likely that you will accumulate fat deposits if the body does not use them for various essential processes. Incorporate viscous, soluble fiber in the diet, because it will ease the digestion process.
These fibers bind with water in the intestines, forming a thick, gelatinous substance that remains longer in your gut. This slows down digestion and makes you feel fuller during the entire day. 10. Decrease Carb Intake Decreasing your carb consumption is one of the most effective ways to lose excess fat, a result supported numerous times by studies. When you decrease carb intake, your appetite slowly begins to decrease and you start losing weight as a result. It's been shown that low-carb diets can produce two to three times more weight loss than low-fat diets. 11. Reduce Stress Stress is not only detrimental to your mental health; it is also detrimental to physiological health. The more stress you experience, the more cortisol, the so-called 'stress hormone', is released in your body. Cortisol increases appetite and stimulates the storing of fat in the midsection. 12. Drink More Green Tea Green tea is one of the most effective beverages for those wanting to decrease their body fat. It is filled with antioxidants and caffeine, which can rev up your metabolism and result in increased fat burning even while you are resting. 13. Sleep More Sleep is the time when both muscle gain and weight loss occur. Even if you follow everything listed above, you will not experience weight loss unless you sleep at least 7–8 hours every night. Research has shown that people who lack quality sleep tend to gain a lot more weight. If this blog offered you any value, please recommend it and share it with others! Also please connect with me on my website, Facebook page, and Instagram if you want to stay in touch or give me any feedback!
https://medium.com/gethealthy/13-tips-to-losing-belly-fat-in-record-timing-99c92bea0c2f
['Jeremy Colon']
2017-02-22 04:49:02.160000+00:00
['Health', 'Fitness', 'Nutrition', 'Metabolism', 'Weight Loss']
7 Things every SEO strategy needs
How to create an SEO strategy 7 Things every SEO strategy needs SEO Strategies needed in 2019 In the past few years, the internet sector has seen numerous revolutions. A massive amount of information has moved online, and businesses have grown around it. With thousands of websites competing for attention, Search Engine Optimisation has become a very important discipline: highlighting your content and making it more relevant, user-friendly and convenient matters now more than ever. To earn one of the top slots in the search engine results, it is very important to make your content SEO-friendly. In this article, let's talk about 7 things you can do to improve your chances. 1. Building a mind map A mind map is the name given to the strategic sketch you need in order to create a strong strategy from scratch. It is simply a collection of ideas that spreads out from a central topic, moving from general categories toward more specific ones. You have to understand that building a mind map does not mean visualizing your project's final strategy. Sketching a mind map beforehand helps you think about and plan your project; it is not a presentation of a complete plan. Mind maps are tools that help you stitch your plan together by shaping the vision in your thinking process in a way that makes creating the project easier. You can also sketch small, partial plans and then combine them into a whole plan to fuel the project. They also reduce the load your strategy imposes on your working memory, freeing you to focus on brainstorming and thinking.
If you find it hard to visualize your plan and strategy, you can use online tools, or simply write down ideas step by step as you think them through. Being able to think in a nonlinear fashion is the foremost benefit of creating a mind map before touching the project itself. A mind map lets you see everything in an arrangement that matches the organized way your project plan works, so creating one can be very beneficial for advancing your SEO strategy. 2. Representing the plan visually: Once your strategy starts to take shape, you will need a deeper, more professional document than just the mind map. Remember that the word strategy signifies a plan: you have an aim, and various specific tasks attached to that aim. Some tasks are more important than others, some are recurring tasks that need to be honed and iterated, and the subtasks become more specific and numerous as time passes. You will have to present all of this simply to your teams and clients, in a way that every party can understand and, of course, edit. You can use software like Google Sheets or whichever app you prefer; the app you use is not as important as the method behind it. What matters most is that all parties are clear on how to read the plan and, if needed, how to edit it. The basic questions that must be clear to everyone are: who is assigned which task, in which order the tasks are to be done, and what the status of each in-progress task is. 3.
Clarifying the issues of the Company: Whether your SEO is outsourced or in-house, it is mandatory to have a basic understanding of the company you are working with in order to make the SEO strategy successful. You need firsthand knowledge of the strengths you can use as leverage to get the best SEO value possible. It is better to know which tactics can work to create the best brand identity, and you should also assess the problems you may face in the future so you can deal with them. Below is an overall idea of some of the most important factors to consider beforehand… What's the unique selling proposition of the product? "Product" here can refer to a single product or a range of products; regardless, you need to assess and understand which factors make your company different, so that the strategy you have planned can work. This will hugely impact the kind of outreach that makes sense, the type of people you are targeting, the keywords you will be using, and many more such questions. What's the vision of the company? You need to understand the industry you are going to work in a little more deeply if you want to achieve real profits by making your website more visible in the search engines. Go deeper into the company's vision to find ideas that can guide you on your path and, ultimately, to your goal. If you think the vision enshrined in the company's plan is not serving the cause, then you must reshape the campaign so that it does. Where are the weaknesses of the company right now?
This is one of those basics that at first looks safe to skip, but it becomes a serious pain for the SEO strategy later if it isn't addressed in the early stages. You need to analyze and understand the shortcomings, limitations and problems of your company before you commit to your strategy. 4. Assessing what the Audience wants Even for banks, "Know your customer" is now a big thing. For the service sector it is crucial to first understand the demands and aspirations of customers, and especially to step outside the keyword cage by accepting that keywords are not the only mirror of the audience's wishes. Here are some of the things you should research by surveying the audience, talking to individuals and getting to know your customers: Are they receptive to upselling or not? For people with less experience of the self-help industry, it is common for pundits to exploit their audiences by upselling products, spending huge amounts on advertising rather than on improving the product's quality. Because marketing is a big thing in the online world, and the first priority is getting your product in front of the customer, you will have to be highly conscious of this fact as you develop your plans. 5. Understanding what the audience already knows You need a basic understanding of your audience's existing knowledge. Two types of people are generally seen out there: the first knows everything about the product and will genuinely ridicule you if you show them introductory material, and the second doesn't understand a thing about your products. Are they close to the industry or not? It is crucial to understand whether your audience is businesses or consumers, and whether those people are closely connected with the industry or not.
If not, are they willing to learn more about it, or are they only interested in selling their own products for profit? 6. Clear visions and pointed aims: To make an aim useful, you have to make it pointed by focusing harder on the problematic parts, and by understanding how your metrics combine rather than chasing a single exact number. Be as purposeful as possible when you choose your KPIs and metrics. Everyone wants their profit margin to grow higher and faster than their total investment. But remember to set a goal that squares with reality and can therefore be achieved. A goal that seems wildly ambitious and sky-touching can feel like a chest-thumping moment at first, but when the strategy cannot hold up its end of the deal, it usually crumbles, and businesses have collapsed in unprecedented ways purely because of idealistic aims and approaches. 7. Start making strategies according to realistic goals: A strategy is everything about achieving the particular goals and aims you have set for your company; it takes the business into its future by shaping its direction and consequences. Your metrics should reflect what the working plan is actually meant to produce, whether that is authority, rankings, or organic search traffic. Which metrics you choose, and why, should always be a unanimous decision taken by everyone in the company together, so that the strategies you shape always serve the aims that were set. Letting these fall out of sync can do serious damage to the company's goals and might put its future in jeopardy.
Conclusion As said before, SEO is very important in today's online marketing for increasing profitability. By following the 7 things mentioned above, most SEO strategies can surely benefit. For more information, you can visit our website and see the course by clicking on https://www.greatlearning.in/pg-program-strategic-digital-marketing-course. By joining this course you will get first-hand knowledge of various marketing solutions and online/digital marketing strategy lessons.
https://medium.com/my-great-learning/seo-strategy-needed-in-2019-826f1748f829
['Great Learning']
2019-09-09 11:28:10.512000+00:00
['Business', 'Strategy', 'Marketing', 'Digital Marketing', 'SEO']
Make a clever bitcoin price chart with React and D3.
I've been working with React and D3 for the last 2 years, building custom interactive charts for both startups and well-established companies across the world in my remote work endeavors :). In that time, I've really enjoyed the flexibility I get while making visualizations with these tools, and I'll share that experience with you. Yes, you! Some people like to look at data visualization as an efficient way of conveying a story: telling some interesting insight somebody has discovered in data that they want to share with their audience. It's often said that a picture is worth a thousand words, and I like to think data visualization works the same way. In this blog post, we shall use D3, a wonderfully flexible JavaScript library, to build a custom 30-day bitcoin price chart powered by coindesk, while leveraging React, a JavaScript library for building user interfaces. Here is what we shall build and what I'll be walking you through in this article. A live demo is here if you want to see it in action. Before I got started with D3, I had to do some homework to make sure I was putting my faith in a decent technology, and my findings on both npm and GitHub gave me the green light. D3 has: Over 474,000 weekly downloads on npm. Over 89,000 stars on GitHub. 228 releases to date. Over 3,000 dependent packages. And it's been around for some good 8 years now. Implementation of the bitcoin price chart All I needed for my 30-day bitcoin price chart was: Root component Fetches raw data from the free coindesk API. Note that coindesk is a news site specializing in bitcoin and digital currencies. Manages updates to the raw data. Manages state for interactions that require redrawing of charts (filter, aggregate, sort, etc.). Child component Gets passed the raw data as a prop. Translates raw data to screen space. Renders the calculated data. Manages state for interactions that don't require redrawing of the chart (hover, click).
Where to calculate data We shall calculate our data inside the getDerivedStateFromProps method because it is simple and straightforward. You should also note that D3 calculations can go anywhere (that makes sense for your project) as long as React can access them in its render function. Never ever let D3 and React manage the same parts of the DOM! OR BUGS!! — Shirley Wu D3 axes Axes are very important in making the data readable, and D3 makes creating them easy. Create axisLeft() or axisBottom() at the beginning of a React lifecycle and set the corresponding scale. Create an svg group element inside the render method. Call the axis on the group element in componentDidUpdate. Here is how I did it. To get started quickly on this project, I bootstrapped the application with create-react-app, a comfortable environment for learning React and probably the best way to start building a new single-page application in React. Under the hood it uses Babel and webpack, but you don't need to know anything about them. I'll start with the code in my root component, App.js, which is responsible for fetching data from the free coindesk API, formatting the raw data and passing it to the child component as a prop.

App.js

import React from "react";
import Chart from "./Chart";

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { data: [] };
  }

  componentDidMount() {
    fetch(`https://api.coindesk.com/v1/bpi/historical/close.json`)
      .then(response => response.json())
      .then(data => {
        this.setState({
          data: Object.keys(data.bpi).map(date => {
            return { date: new Date(date), price: data.bpi[date] };
          })
        });
      })
      .catch(error => console.log(error));
  }

  render() {
    const { data } = this.state;
    return (
      <div>
        <h2 style={{ textAlign: "center" }}>30 day Bitcoin Price Chart</h2>
        <Chart data={data} />
      </div>
    );
  }
}

export default App;

Okay, I understand this is somewhat a lot of code but I'll try to break it down.
What you should focus on is the fetch browser API inside the componentDidMount lifecycle method. After the component is mounted, a fetch request is made to the coindesk API, which returns a promise that is handled appropriately using JavaScript promises. The bitcoin price index (bpi) data comes back as entries of the form "2019-12-05": 7404.4033, hence the need to convert the dates to actual Date objects instead of strings before adding the data to state. These Date objects will be helpful when creating the scale for the horizontal axis on the actual graph. This data is then passed to the child component as props.

Chart.js

import React from "react";
import * as d3 from "d3";

const width = 650;
const height = 400;
const margin = { top: 20, right: 5, bottom: 50, left: 60 };

class Chart extends React.Component {
  constructor(props) {
    super(props);
    this.state = { data: null };
  }

  xAxis = d3.axisBottom();
  yAxis = d3.axisLeft();

  static getDerivedStateFromProps(nextProps, prevState) {
    const { data } = nextProps;
    if (!data) return {};

    const xExtent = d3.extent(data, d => d.date);
    const yExtent = d3.extent(data, d => d.price);

    const xScale = d3
      .scaleTime()
      .domain(xExtent)
      .range([margin.left, width - margin.right]);
    const yScale = d3
      .scaleLinear()
      .domain(yExtent)
      .range([height - margin.bottom, margin.top]);

    const line = d3
      .line()
      .x(d => xScale(d.date))
      .y(d => yScale(d.price));

    const minY = d3.min(data, d => d.price);
    const area = d3
      .area()
      .x(d => xScale(d.date))
      .y0(d => yScale(minY))
      .y1(d => yScale(d.price));

    return { xScale, yScale, data, line, area };
  }

  componentDidUpdate() {
    this.xAxis.scale(this.state.xScale);
    d3.select(this.refs.xAxis).call(this.xAxis);
    this.yAxis.scale(this.state.yScale);
    d3.select(this.refs.yAxis).call(this.yAxis);
  }

  render() {
    const styles = {
      container: { display: "grid", justifyItems: "center" }
    };
    const { data, line, area } = this.state;
    return (
      <div style={styles.container}>
        <svg width={width} height={height}>
          <path id={"line"} d={line(data)} stroke="#6788ad" fill="transparent" />
          <path id={"area"} d={area(data)} fill="#6788ad" style={{ opacity: 0.2 }} />
          <text transform={`translate(${width / 2 - margin.left - margin.right}, ${height - 10})`}>
            Dates for the last 30 days
          </text>
          <text transform={`translate(15, ${(height - margin.bottom) / 1.5}) rotate(270)`}>
            Amount in USD
          </text>
          <g ref="xAxis" transform={`translate(0, ${height - margin.bottom})`} />
          <g ref="yAxis" transform={`translate(${margin.left}, 0)`} />
        </svg>
      </div>
    );
  }
}

export default Chart;

Okay, I know this is really a lot of code, because it's where the heavy lifting takes place. You shouldn't be worried though, I got you on this! Let's start breaking it down. None of the following steps is very difficult; it's just chaining them all together that was a little tricky at first. Before anything else, you need to install D3 from npm or yarn. For npm users, run npm install d3, and for yarn users, run yarn add d3.

Step 1 — Receiving the data

const { data } = nextProps;
if (!data) return {};

Our focus in this file should be on the getDerivedStateFromProps method in the Chart component. It receives props from the root component and pulls off the data prop. If the data is not available, it does nothing.

Step 2 — Calculating the horizontal and vertical scales

const xExtent = d3.extent(data, d => d.date);
const yExtent = d3.extent(data, d => d.price);

const xScale = d3
  .scaleTime()
  .domain(xExtent)
  .range([margin.left, width - margin.right]);
const yScale = d3
  .scaleLinear()
  .domain(yExtent)
  .range([height - margin.bottom, margin.top]);

If the data is available, both the horizontal and vertical scales are calculated by leveraging D3's scaleTime() and scaleLinear() methods. The results are stored in the xScale and yScale variables respectively. In both cases, the margin is taken into account when specifying the array values for the range.
Step 3 — Calculating the line and area

const line = d3
  .line()
  .x(d => xScale(d.date))
  .y(d => yScale(d.price));

const minY = d3.min(data, d => d.price);
const area = d3
  .area()
  .x(d => xScale(d.date))
  .y0(d => yScale(minY))
  .y1(d => yScale(d.price));

return { xScale, yScale, data, line, area };

The line and area of the chart are calculated by leveraging D3's line() and area() methods, with the results stored in the line and area variables respectively. The getDerivedStateFromProps method then returns the xScale, yScale, line and area variables together with the data prop, all of which can be accessed in the render method from state.

Step 4 — Rendering the line and area to the svg

<path id={"line"} d={line(data)} stroke="#6788ad" fill="transparent" />
<path id={"area"} d={area(data)} fill="#6788ad" style={{ opacity: 0.2 }} />

Inside the render method, we return an svg element with both width and height properties. We then append two path elements for the line and area, passing them the data from state. At this point, we should be able to see the line and area chart in the React application.

Step 5 — Adding the axes

xAxis = d3.axisBottom();
yAxis = d3.axisLeft();

Just above the getDerivedStateFromProps method, we create the two axes, the x-axis and the y-axis, by leveraging D3's axisBottom() and axisLeft() methods.

<g ref="xAxis" transform={`translate(0, ${height - margin.bottom})`} />
<g ref="yAxis" transform={`translate(${margin.left}, 0)`} />

Inside the svg element in the render method, we add two group elements, each with a ref to either the xAxis or yAxis.

componentDidUpdate() {
  this.xAxis.scale(this.state.xScale);
  d3.select(this.refs.xAxis).call(this.xAxis);
  this.yAxis.scale(this.state.yScale);
  d3.select(this.refs.yAxis).call(this.yAxis);
}

These refs are used in the componentDidUpdate method to call the axes on the group elements, but before doing that, we first need to add the scales to the axes.
Okay, I'm on the home stretch now :)

Step 6 — Labeling the axes

<text transform={`translate(${width / 2 - margin.left - margin.right}, ${height - 10})`}>
  Dates for the last 30 days
</text>
<text transform={`translate(15, ${(height - margin.bottom) / 1.5}) rotate(270)`}>
  Amount in USD
</text>

The final step involves adding labels to our graph's axes. Imagine a graph without labelled axes, huh?! Some information would still be missing. So we add two text elements to the svg element in the render method: one adds a label for the x-axis and the other for the y-axis. And you've successfully visualized bitcoin's 30-day price chart with React and D3. Not too tough. Conclusion At first glance, visualizing bitcoin data with React and D3 seems a bit daunting, but D3 actually makes the major part (creating the paths and scales) simple, and once that's done, the rest is pretty easy to work through. Check back in a few weeks or even days; I'll be writing about adding a brush with React and D3 to a bar chart so that only data within a specified domain is retrieved, or something similar about React and D3. Thank you for reading, I hope this gives you an idea of how React and D3 work together to create custom data visualizations. I have a keen interest in data visualization with React and D3. If you follow me on Twitter, I won't waste your time.
https://medium.com/analytics-vidhya/make-a-clever-bitcoin-price-chart-with-react-and-d3-e6359d604b54
['Livingstone Asabahebwa']
2020-01-10 08:38:15.914000+00:00
['D3js', 'React', 'Data Visualization', 'JavaScript', 'Bitcoin']
Mining Twitter Data for Sentiment Analysis of Events
Twitter is a rich source of information. From minute-to-minute trends to general discussions around topics, Twitter is a great source of data for a project. Also, Twitter has built an amazing API for developers to use this data. Could we track an event and see what people are thinking? Is it possible to run sentiment analysis on what the world is thinking as an event unfolds over time? Could we track Twitter data and see if it correlates to news that affects stock market movements? These were some of the questions in my mind as I began to dig into Twitter data recently. Let's Use Twitter for Sentiment Analysis of Events If you prefer to listen to the audio version of this blog, I have also recorded a podcast episode for this blog post, where I go into more detail on each of the steps, including caveats and things to avoid. You can listen to it on Apple Podcasts, Spotify or Anchor.fm, or on one of my favorite podcast apps: Overcast. Let's get right into the steps to use Twitter data for sentiment analysis of events: 1. Get Twitter API Credentials: First, visit this link and get access to a developer account. Apply for access to Twitter API Once you register, you will have access to a Consumer Token, Consumer Secret, Access Key and Access Secret. You will need to state the reason for applying for API access; you can mention reasons such as "student learning project" or "learning to use Python for data science". 2. Setup the API Credentials in Python: Save your credentials in a config file and run source ./config to load the keys as environment variables. This is so as to not expose your keys in a Python script. Make sure not to commit this config file to GitHub. We will use the tweepy library in Python to access the Twitter API. It is a nice wrapper over the raw Twitter API that does a lot of the heavy lifting of creating API URLs and HTTP requests.
We just need to provide our keys from Step 1, and tweepy takes care of talking to the Twitter API, which is pretty cool. Run pip install tweepy to get the tweepy package into your virtual environment. (I've been using pyenv to manage different versions of Python and have been very impressed. You'll also need the pyenv-virtualenv package to manage virtual environments for you, but that's another blog in itself.) In Python you can type:

import os
import json
import tweepy
from tweepy import Stream  # Useful in Step 3
from tweepy.streaming import StreamListener  # Useful in Step 3

consumer_key = os.getenv("CONSUMER_KEY_TWITTER")
consumer_secret = os.getenv("CONSUMER_SECRET_TWITTER")
access_token = os.getenv("ACCESS_KEY_TWITTER")
access_token_secret = os.getenv("ACCESS_SECRET_TWITTER")

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

This will set up your environment variables and also set up the api object that can be used to access Twitter API data. 3. Getting Tweet Data via Streaming API: Having set up the credentials, it's now time to get tweet data via the API. I like using the Streaming API to filter real-time tweets on my topic of interest. There is also the Search API, which allows you to search historic data, but as you can see from this chart, it can be a little restrictive on free access, with a maximum reach of the last 7 days of data. For the paid plan, based on what I saw online, the price can range anywhere from $149 to $2,499/month (or even more); I couldn't find a page with exact pricing on the Twitter website. To set up the Streaming API, you will need to define your own class method on_data that does something with the data object from the Streaming API.
```python
class listener(StreamListener):

    def on_data(self, data):
        data = json.loads(data)

        # Filter out non-English tweets
        if data.get("lang") != "en":
            return True

        try:
            timestamp = data['timestamp_ms']

            # Get the longer 280-character tweets if possible
            if data.get("extended_tweet"):
                tweet = data['extended_tweet']["full_text"]
            else:
                tweet = data["text"]

            url = "https://www.twitter.com/i/web/status/" + data["id_str"]
            user = data["user"]["screen_name"]
            verified = data["user"]["verified"]

            write_to_csv([timestamp, tweet, user, verified, url])
        except KeyError as e:
            print("Keyerror:", e)
        return True

    def on_error(self, status):
        print(status)
```

I have not included the write_to_csv function, but it can be implemented with the csv library, and some examples can be seen here. You could also save the tweets into a SQLite database, especially if there are several hundred thousand tweets. SQLite also gives you command-line access to all the information via SQL commands; with a CSV, you will have to load it into a pandas DataFrame in a notebook. It just depends on which workflow you prefer. Typically I save into the SQLite database and use the read_sql command in pandas to turn it into a DataFrame object, which lets me access the data both from the command line and from pandas.

Finally, run this stream_and_write function to start the Streaming API with the listener we wrote above. The main thing is to call the Stream API in extended mode, as it gives you access to the longer and potentially more informative tweets.

```python
import time

def stream_and_write(table, track=None):
    try:
        twitterStream = Stream(auth, listener(), tweet_mode='extended')
        twitterStream.filter(track=["AAPL", "AMZN", "UBER"])
    except Exception as e:
        print("Error:", str(e))
        time.sleep(5)
```

Another important thing to note is the number of items you can track using the Streaming API. From my testing, I was not able to track more than 400 or so items in the track list. Keep this in mind while building out your ideas.

4.
Get Sentiment Information: Sentiment analysis can be done either in the listener above or offline, once we have collected all the tweet data. We can use out-of-the-box sentiment-processing libraries in Python; from what I saw, I liked TextBlob and VADER Sentiment. TextBlob provides a subjectivity score along with a polarity score. VADER provides pos, neu, neg and compound scores. For a single sentiment score between -1 and 1 from either library, use polarity from TextBlob and compound from VADER Sentiment. As per the GitHub page of VADER Sentiment:

VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media, and works well on texts from other domains.

For TextBlob:

```python
from textblob import TextBlob

ts = TextBlob(tweet).sentiment
print(ts.subjectivity, ts.polarity)  # subjectivity and polarity (sentiment) scores
```

For VADER:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
vs = analyzer.polarity_scores(tweet)
print(vs["compound"], vs["pos"], vs["neu"], vs["neg"])
```

Saving the sentiment information along with the tweets allows you to build plots of sentiment score for different stocks or events over time. Whether VADER or TextBlob is the better choice will depend on your own project; I also tried an ensemble of the two libraries, but in the end I liked the simplicity of using just one library for the different tasks.

Ideally, you'd train your own sentiment analysis model on the things that matter to you, but that would require collecting your own training data and building and evaluating a machine learning model. To capture the negative emotion in sentences like "I expected an amazing movie, but it turned out not to be", we need models that can work with sequence data and recognize that the "not to be" negates the earlier "amazing"; this calls for models built from Long Short-Term Memory (LSTM) cells in neural networks.
Training and setting up an LSTM network for sentiment analysis could be another blog post of its own; leave a comment below if you are interested in reading about this.

5. Plot Sentiment Information: I plotted the sentiment as Manchester United lost to Barcelona 3–0 in their Champions League quarter-final. As you can see, the sentiment drops over time: as the teams played on April 16th, around the afternoon mark, the sentiment starts to fall.

Sentiment Dropping as Manchester United lose to Barcelona

While the sentiment score worked quite well for a sports event, what about the stock market? Below is Qualcomm's (QCOM) performance during the week Apple dropped its lawsuits against Qualcomm (the exact news came on April 16th). We'd expect a significant increase in positive sentiment around this news, but it is very hard to see it conclusively below:

QCOM stock performance during the week when their lawsuit with Apple was dropped

This makes extracting alpha from sentiment analysis much harder. I feel it is a great tool for seeing sentiment around news events or sports activities, but trying to correlate sentiment with stock market performance is more difficult, as it involves filtering out noisy tweets and doing a lot of feature engineering work to extract signal.

6. Set this up on AWS or Google Cloud Platform:
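The post is cut off after the Step 6 heading, but the usual shape of that step is to run the collector unattended on a small VM. Purely as a hypothetical sketch (the unit name, paths, user, and script name are my placeholders, not from the original post), a systemd service on an AWS EC2 or GCP Compute Engine instance could look like:

```shell
# /etc/systemd/system/twitter-stream.service  (hypothetical file)
[Unit]
Description=Twitter streaming sentiment collector
After=network-online.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/twitter-sentiment
# Note: systemd expects plain KEY=value lines here, without the `export`
# keyword used when sourcing the config file from a shell.
EnvironmentFile=/home/ubuntu/twitter-sentiment/config
ExecStart=/usr/bin/python3 stream_tweets.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with sudo systemctl enable --now twitter-stream keeps the stream running across crashes and reboots.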
https://towardsdatascience.com/mining-live-twitter-data-for-sentiment-analysis-of-events-d69aa2d136a1
['Sanket Gupta']
2020-11-06 00:49:05.241000+00:00
['Data Visualization', 'Sentiment Analysis', 'Twitter', 'Data Science', 'Python']