Extreme Athleticism Is the New Midlife Crisis
For some, a midlife crisis arises from a fear that the weaknesses that have dogged you are not just temporary challenges to cope with, but a permanent part of who you are. The chance to confront those weaknesses and recognize the hold they have over us is a key benefit of ultramarathons, CrossFit, or whatever other endurance sport people turn to. Christine Cassara was one of these people. She and her husband were struggling with fertility issues, and at one appointment the doctor told her he’d rather treat a 40-year-old than someone who was obese. The cruelty of the comment took her aback. Cassara was pushing 40 and weighed 340 pounds. After a couple of false starts, Cassara lost over 200 pounds with the help of a diet, but she worried about backsliding, so she tried running. Beginning with a couch-to-5K plan, she worked her way up to half marathons and signed up for a full marathon, only to realize she didn’t actually like running all that much. A friend suggested triathlon, and she started small with pool laps, short bike rides, and mile runs. Cassara fell in love with the sport and completed sprint and Olympic triathlons. In late August she traveled from her home in St. Petersburg, Florida, to Copenhagen for her first Ironman. “In the back of my mind I always considered myself a quitter,” Cassara says. “The most important thing it’s done in all aspects is to give me the will to continue and not quit.” Others experience a midlife crisis as a sense of slackening, of lost focus or ambition. That’s what Lisimba Patilla, a 44-year-old sales manager from Medina, Ohio, by way of Flint, Michigan, felt when he discovered triathlons. Three years ago, the former Division-II college football player and track athlete worried he had grown complacent in life and was losing his edge. On a business trip to Reno, a cousin recommended a book on triathlons, and Patilla was so inspired he called his wife and told her he was going to be a triathlete.
There was one significant problem. He nearly drowned when he was 12, and the experience left him so traumatized he wouldn’t let water from the shower hit his face. “If you fall off your bike and get a wound on your leg, you can still get on that bike,” Patilla says. “When you have a traumatic experience, it puts a wound on your mind, and it becomes a recurring nightmare.” Patilla bought the thickest wetsuit he could find and experimented with a half-dozen snorkels. In his first triathlon attempt, he made it 500 meters before being pulled out of the water. From that point on he told himself that he was going to swim like everyone else. Patilla went to a pool twice a day and learned how to swim in the shallow end. He competed again a few months later and completed his first sprint triathlon. “I can’t tell you I didn’t panic,” he says. “I can’t tell you a grown man didn’t cry. But I got through it. When I got done, I was exhausted, but I knew at that point I could do this.” He did, and in doing it, he gained a measure of clarity about what he’s capable of. “Triathlons don’t lie,” he says. “At 44 years old I need that.” Extreme fitness is less about being young again and more about building yourself up for the years ahead. When Suzanna Smith-Horn burned out on the corporate lifestyle in her 40s, she sold her shares in her startup and quit her job. Her friends thought it sounded fantastic to have all that free time, but Smith-Horn struggled with the loss of identity. “The reality was that’s a really tough place to be,” she says. “Because you’re trying to figure out, what should I be doing in life? Who am I? What’s my purpose? I went into these places in life where I was pretty depressed.” She started running, and her existential question was answered. She ran a marathon and then advanced from there to 100-mile races.
With a career in tech sales, Smith-Horn, now 51, is able to work from her home in the Upper Valley of Vermont, where she has access to a wide assortment of trail systems. There are days when it’s hard to get out the door, especially in the bitter cold of winter, she says, but after a few miles, her mind clears. “Sometimes you’ll be like, where am I?” Smith-Horn says. “You’re in the zone. Nothing else really matters and you’re just there. It doesn’t come overnight. You learn every race, every trail. You’re constantly learning. You have to learn how do you take care of yourself. You really have to learn how to manage yourself for hours on end without a lot of support.” This realization was hard-earned. During the winter of 2016, Smith-Horn slipped on a patch of ice and broke her neck. Her doctor told her that running was off limits and so was hiking, but she had the Grindstone 100, an ultramarathon, on her schedule that fall, and that was non-negotiable. She walked every day for 4–5 hours in her neck brace to maintain her fitness. Eight months after her fall, Smith-Horn finished the 100-mile race in the Allegheny Mountains in just over 31 hours, beating half the field of finishers and coming in 10th among all female runners. Not that her placing has much relevance to her. “I’m a 50-year-old middle of the pack,” she says before catching herself. “Ehhhh, I hold my own. Everyone has a story and there’s an importance to everyone who’s out there, whether they’re finishing a course in record time or the last one finishing. We’re all doing the same thing.”
https://elemental.medium.com/extreme-athleticism-is-the-new-midlife-crisis-d87199a18bed
['Paul Flannery']
2019-11-01 18:59:14.438000+00:00
['Trends', 'Health', 'Running', 'Great Escape', 'Aging']
Week 10: My Keto Story
How do you know this story is super real? Because I am a real flake. I have not posted for five weeks. The not posting for five weeks also coincides with me not being consistent at adhering to the Keto diet. Guess what that means? I’ll give you the sequence:
- Not eating Keto regularly (lots of cheat days).
- Mostly not being in Ketosis.
- Also, sometimes eating lots of crap, because once you are “off the wagon” you tend to get extra crazy just because you know sometime soon you will go back on the wagon and you just want to eat all the shit while you can.
- Stop losing weight during the weekly weigh-in.
- Start gaining a pound or two during weekly weigh-ins.
- Stop doing weekly weigh-ins.
I did add going to the gym to my project about 8 weeks ago and it is POSSIBLE that also contributed to the weight loss slowdown. BUT it is possible that the problem was my own sloppiness. Just sayin’. The weight gain is due to eating shit. Regularly. For about a month. So, to recap: there have been no posts in My Keto Journey, because there has been no KETO JOURNEY. And now it’s time for a reboot. For comparison’s sake — STATS at start. STATS TODAY: AGE: 44 HEIGHT: 5'4" WEIGHT: 155 lbs (!) Yep. Right back where I started 10 weeks ago. Wheeee! (That was sarcasm.)
https://medium.com/my-keto-story/week-10-my-keto-story-7ed4e4601d48
['Mary Lucus-Flannery']
2017-01-02 21:06:31.554000+00:00
['Weight Loss', 'Diet Fail', 'Ketosis', 'Health', 'Ketogenic']
“I Started to Trust Myself More”: Seneca B Retraces Her Artistic Development During The Making of ‘Bloom’
Things are moving rather quickly for Palo Alto-based producer Seneca B. Since posting her first song on SoundCloud a little over three years ago and releasing her debut Rascal EP in May of 2015, Seneca is now on pace to break 3 million Spotify streams by the end of 2018 — a remarkable rate of progress for an independent musician in a highly saturated genre. The Bandcamp version of Bloom. With Seneca now juggling grad school and her growing popularity as an artist, she graciously offered to take a break from her busy schedule and talk to me about her October 2016 release Bloom — the album she considers her most important to date. Notable for being Seneca’s first full-length collaborative album, Bloom combines her lush instrumental compositions with vocalist-assisted tracks from London-based producer A D M B. The two artists demonstrate a natural chemistry throughout the project, while the mix of strictly instrumental numbers and songs with vocals makes for a potent listen. Taking a moment to look back on Bloom now, Seneca feels like the album was a watershed moment in her brief but successful career. “Bloom is probably my favorite,” she tells me. “It was a unique experience getting to work with ADMB on the whole thing and it felt like I was taking a bit of a leap into music. It felt more like me than my prior projects.” In addition to taking a leap and feeling more like herself with the music she produced, the creation of Bloom helped Seneca define her artistic voice and gain the confidence to move away from strictly sample-based production. Finding her voice made Seneca trust her own abilities more. “I became much more comfortable with what types of sounds I liked and wanted on my mixes,” Seneca explains.
“Before I felt like I always had to compare myself to other songs, but on this project I started to trust myself more.” The Bandcamp version of Seneca’s Friends instrumental album. The process of trusting herself and taking risks helped Seneca create a new level of depth and texture in her music while pushing her creative limits. “A lot of the songs where I didn’t use samples, I was able to learn more about song structure and how to make the sounds,” she says. “On ‘Rain’, I played the keys, drums, and recorded bass to it.” Though being responsible for all of the live instrumentation in her beats wasn’t always easy, this new way of composing songs was deeply satisfying. “It’s a really different experience putting your own non-sampled stuff together, but I think it’s very rewarding,” she explains. “It’s all on you to make sure it sounds good together, rather than relying on a really dope sample to do it for you.” As Seneca recorded “Rain”, she learned to appreciate the patience needed to get the sound of each individual instrument just right. “Getting the instrument to sound how you want it to while recording is a challenge,” she says. “Finding the groove and melody you want isn’t really that hard, but then timing comes into play. ‘Rain’ was the first song I played every single element out on instruments, so it was hard getting the bass and piano sounding right together. But it gets easier as time goes on.” The Bandcamp version of Seneca’s debut Rascal EP. The rewarding challenges that came with the making of Bloom weren’t limited to Seneca recording her own live instruments, as she also employed a number of found sounds during the recording process.
Seneca used her iPhone to capture snippets of jingling car keys, scribbling, and paper crunching to make the sublime track “Wednesday”, while “Pineapple Soda” features her using a pineapple soda bottle to help fill out the percussion section. Although Bloom features an increased use of found sound and live instruments, Seneca still wanted to mix in some sampling on the album. “It was nice getting to combine those types of songs with more sample heavy ones like ‘Haunted’,” she says. “It helped me feel like my sound was more coherent, as opposed to separating my music into sample-based vs. electronic stuff.” The process of making “Haunted” reminded Seneca of the emotional power of sampling, as she found a record with special resonance during a difficult transitional point in her life. “‘Haunted’ is probably one of my favorites because I made it in a trio of beats all using the same album for samples,” she says. “‘Haunted’, ‘May’, and the beat in ‘Passing Notes’ with Jay Squared and Amber Alyse were all made after a weird breakup-type situation, so they all hold a bit of significance for me.” Micro-Chopping Seneca B — an exclusive 21-track playlist of Seneca B production. Seneca made all three songs in a swift, cathartic creative burst, and now looks at each track as an important time capsule. “I made all three of those songs in a few days and ran with the vibe that I had gotten into,” she says. “I have a habit of making songs in twos and threes, so it’s always cool to me to hear them in different projects because they’ll always remind me of each other.” Having these powerful remembrances of a specific time in her life reminds Seneca how grateful she is to have music as a way to express herself. “It’s nice to have an outlet and I appreciate it,” she says.
“It’s also cool sometimes to have a song that means something to me mean something else to a bunch of other people. It shows how subjective everything really is, but still lets me keep the original meaning just for myself.” Regardless of how interpretations of the music may differ from listener to listener, more than a year after its initial release Bloom remains, for Seneca, “a show of how far I’ve come and in what direction I want to start taking my music.” Whether Seneca’s next full-length project is a collaboration or a solo effort, Bloom will endure as a testament to the value of moving outside of your comfort zone, pushing your creative boundaries, and channeling your truest emotions into your work.
https://medium.com/micro-chop/i-started-to-trust-myself-more-seneca-b-traces-her-artistic-growth-during-the-making-of-bloom-6ec2eeb57885
['Gino Sorcinelli']
2019-08-19 23:58:16.149000+00:00
['Playlist', 'Producer Interview', 'Hip Hop', 'Music', 'Album Breakdown']
Will The Expected Wave Of New Millionaires Be Good For San Francisco?
With many Californian companies planning to go public soon, there will be an influx of thousands of new millionaires in San Francisco. In 2018, 34 startups from the Bay Area went public, and 23 of them closed the year above their IPO offering prices. Uber, Pinterest, Lyft, and Airbnb are still private companies at the moment, but they carry huge valuations, which means that once they do go public, large amounts of money will flood the city next year. Airbnb was recently valued at $31 billion, while Pinterest and Lyft were valued at $12 billion and $15 billion respectively. Meanwhile, market estimates for Uber went as high as $120 billion. While these figures don’t mean much yet, as there is no sure answer to what prices the companies will actually command when they go up on the stock exchange, predictions are that hundreds of billions of dollars will enter San Francisco, bringing thousands of new millionaires into the city. According to Herman Chan, a real estate agent working with Sotheby’s, even if just half the IPOs end up happening, there will be an overnight influx of ten thousand millionaires. Given their new-found fortunes, these millionaires will want to spend a lot on cars, houses, new businesses, and luxuries. On the other hand, Christine Kim, the president of Climb Real Estate, pointed out that millennial tech workers are not looking to own cars but simply want to stay close to entertainment, which means that they will remain in the city. $5 Million House Prices Denis Kahramaner, a real estate agent at Compass who specializes in data analytics, talked about the future of the real estate market. Speaking to a crowd, he asked whether in five years there will be a one-bedroom condo worth less than a million dollars, and whether there will be single-family homes selling for $1 million to $3 million.
To the surprise of the crowd, Kahramaner pointed out that single-family home sale prices in San Francisco could actually climb to an average of $5 million. With a large number of IPOs coming up, people looking to sell their houses have been taking them off the market. As a result, the California housing market has softened, with home sales currently down. People feel they should wait until next year to sell, when the city will be filled with new millionaires. With a number of IPOs taking place at the same time, potentially thousands of young people with large amounts of money will be looking to buy homes, which will impact the housing market. Menlo Park’s Facebook and Mountain View’s Google went public at one point as well; however, their workers were spread out across the Bay Area, so the impact their IPOs had on the real estate market was diffuse. The situation is different nowadays. Many startups are based in San Francisco, thanks in part to the city’s tax break, and brokers point out that workers and startups will remain in the city. Back in 2018, a total of 5,644 properties were sold in the city, only 2,208 of which were single-family homes. According to Compass, software employees account for over 50% of buyers. Those currently in the market for a house are seeking to buy fast, while the number of houses for sale shrinks but before the wave of IPOs hits the city. Tom McLeod, the founder of Omni, who has been renting for close to a decade, said he felt that if he didn’t buy before the IPOs hit, he would be priced out forever. It’s important to note that housing is expensive as it is: 60% of tech workers cannot afford homes in the area in their current situations.
While the Bay Area is already swimming in money, the new wave of millionaires will make it even more difficult to get a home at an affordable price. According to U.S. Census data from 2018, the median household income is $98,710, while the median price of a home in the metro area is approximately $900,000. Throwing Caution To The Wind While there is no doubt that a lot of money will flood San Francisco, many experts feel that a lot of young people don’t take into consideration how volatile tech stocks are. Companies instill in their employees the belief that stock goes only one way, up, and some startups have been asking their employees to hold on to that faith for a very long time, about a decade. Ryan S. Cole, a private wealth adviser at Citrine Capital, voiced his worry about the influx of new clients who are preparing to be wealthy, because this generation seems very bullish about the success of their companies. Cole said his firm has been trying to get them to be more cautious, and he believes they don’t think there could be a downturn. It’s important to note that Uber, for example, continues to be dramatically unprofitable, and Mr. Cole reminds his clients that no one can say for sure how well a stock will do once a startup goes public. IPOs can end up being busts despite initial success. For example, Snap opened at $27 per share and is currently trading at $9. Similarly, Groupon opened at approximately $26 per share and nowadays trades around $3. According to Cole, a lot of these people are young and don’t understand that tech stocks are volatile, because they have only ever seen their valuations going up. Managers instill that mentality too, painting pictures of where the company is going in order to get employees to work harder. One piece of advice to take from Mr.
Cole is that they should not be spending too much yet. Another private wealth adviser in the region, Jonathan K. DeYoe, was around tech clients during the dot-com boom in 1997 as well, and points out that back then things were pretty exciting. Nowadays, however, thinking about the millionaires who will hit the city, he worries about inequality in the region. While he notes that some people were talking about pitchforks, he doesn’t think things will go that far, but the wealth will be very visible. $10 Million IPO Parties When it comes to IPO parties, the party-planning community expects them to surpass the booms of the past. Jay Siegen, who once owned a live music club and now curates private entertainment and music, notes that he has worked on events for plenty of people from the IPO crowd, including Airbnb, Postmates, Lyft, Slack, and Uber. According to Siegen, there are multiple parties per IPO, for the companies going public and for the associated firms. The budgets for those parties can easily go above $10 million, with Siegen noting that attractions include A-list celebrities brought in to perform at tables for the executives, as well as ballet performers. He has also noticed a trend among his clients toward curating their own themed concerts, with fleets of bands. For example, Siegen described a 1980s-themed party that featured Devo, the B-52s, A Flock of Seagulls, and Tears for Fears. Besides the celebrities and popular bands, another luxury is ice sculptures. Robert Chislett, the founder of Chisel-it, currently employs around 15 ice sculptors at a warehouse in Concord, California, and notes that he is staffing up for what he believes will be a long year.
The sculptors chiseled a full-sized car for a tech executive party in Atherton, as well as a 10-foot ice Taj Mahal for the swimming pool of a tech executive in San Jose. Mr. Chislett notes that executives in the process of IPOing want predictable things, such as ice cubes with the company logo for drinks, logos carved onto ice rockets (to signify the idea that the company’s stock will spike), and an ice chair with the company’s logo on the back. Protests Coming Up While many are preparing for their future wealth, a tech backlash is also coming. Housing rights activists recently gathered in the Mission district at Radio Habana Social Club. There is a sense of repetition by now, says Sarah “Fred” Sherburn-Zimmer, executive director of the Housing Rights Committee of San Francisco: the choreography is familiar, with cash flooding the city and the stock-less masses gathering. There will be protests against evictions, fights against developers, and protests against tax breaks in front of tech buses. While Sherburn-Zimmer noted that it feels like the same game, the associate director, Maria Zamudio, pointed out that they have lived through boom periods before and have learned their lessons, which means that they will not make the same concessions they did in the past. To Charities or Back In the Market? People with new-found fortunes generally make their charitable donations when they book a big gain in income, which helps them avoid paying a big tax on that gain. IPOs are generally seen as good opportunities for donors to make contributions, and millionaires are increasingly sophisticated at finding tax breaks. Many expect the wealthy to put money into donor-advised funds, which provide tax benefits but don’t require the money to be distributed to charities right away.
On the other hand, advisers in the wealth and philanthropy sectors also expect to see money go back into the market. The process is standard fare in Silicon Valley: building a personal investment portfolio and, in the case of billionaires, starting a private foundation. Given that the money was made in startups, the general sense is that some of it will flow back into the tech ecosystem, funding the entrepreneurs and IPOs of the next generation.
https://suiting-up.medium.com/will-the-expected-wave-of-new-millionaires-be-good-for-san-francisco-c4f01fe3848b
['Suit Up']
2019-04-01 20:41:02.116000+00:00
['Real Estate', 'Business', 'Startup', 'San Francisco', 'Economy']
Why Sex?
Where we learn the difference between sexual and asexual reproduction, and why there could be such a thing as sex at all
Photo by Ousa Chea on Unsplash
I don’t remember exactly when we did it for the first time, but it was probably over one billion years ago, when we were still very small organisms made out of single cells, a bit like bacteria still are. We felt that asexual reproduction was coming to its limits, and that sexual reproduction would be the right way to go. But why? What urged us to do it over and over again for the millions of years to come? To get a glimpse of why sex arose, one has to imagine how the world was before sex started. It was a world mainly populated by small organisms, such as bacteria and single-celled precursors of plants, and they all multiplied simply by cloning themselves. They thus created more or less identical scores of twins from generation to generation. In fact, today these small organisms still do it the same way, while most modern animals do it sexually (with very few exceptions in the realms of insects, fish, reptiles, or birds). Then, nearly one thousand million years ago, it suddenly all changed. We decided to reproduce sexually from then on, meaning that we made a big difference between male and female, that we decided to mix up our genes each time we made new babies, and that we wanted each offspring to have different traits. In short, it was the beginning of individualism. But why did that happen? Well, the reason was most probably that the mixing of genes between each new generation allowed us to compose new traits quicker, traits that would better fit the highly changing environments in which we were living. Evolution could thus act quicker and more easily on each individual offspring. Now, you need to know here that animals don’t reproduce as often as bacteria do (humans, for example, have a generation time of approximately 25 years, while bacteria reproduce every half hour!).
So one of the main advantages of sexual reproduction might have been that it allowed slowly reproducing organisms to give rise to many different individuals each time they reproduced, by mixing their gene pools. However, this reason for the existence of modern sex is just a hypothesis, and the definitive argument for why we do it is still a major scientific mystery. But science is made out of good hypotheses, and this one would imply that sex evolved as an alternative to a short life. So, even if we don’t really know why we’re doing it, it is quite sure that when it comes to evolution, sexual reproduction has given us some very practical results, “even if that is not why we do it”, to paraphrase the Nobel laureate Richard Feynman. By the way, it is interesting to note that today, after having been sexual humans for hundreds of thousands of years, we seem in recent decades to be slowly rediscovering our asexual origins. In fact, genetic technologies and the discovery of stem cells are making cloning more practical than ever. So we might soon go back to asexual reproduction.
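The gap between those two reproduction rates is easy to underestimate. A quick back-of-the-envelope sketch (illustrative only, using just the generation times quoted above) shows how many generations each lineage fits into a single century:

```python
# Rough generation counts per century, using the figures quoted above:
# ~25 years per human generation, ~30 minutes per bacterial generation.
# (Real generation times vary widely by species and conditions.)

HOURS_PER_YEAR = 24 * 365  # ignoring leap years for this rough estimate

def generations_per_century(generation_time_hours: float) -> float:
    """Number of generations that fit into 100 years."""
    return 100 * HOURS_PER_YEAR / generation_time_hours

human = generations_per_century(25 * HOURS_PER_YEAR)  # 25-year generations
bacterium = generations_per_century(0.5)              # 30-minute generations

print(f"Humans:   {human:.0f} generations per century")       # 4
print(f"Bacteria: {bacterium:,.0f} generations per century")  # 1,752,000
```

Bacteria get on the order of 400,000 times more generations than we do, so a slow-reproducing lineage needs another source of variation per generation, which is exactly what mixing gene pools through sex provides.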
https://medium.com/martinvetterli/why-sex-9ca4722c28e0
['Martin Vetterli']
2018-12-20 09:13:34.437000+00:00
['Biology', 'Algorithms', 'Science', 'Evolution']
The Increasing Influence of Social Media on The Future of SEO
The subject of whether and how social media affects ranking in organic search has long been a subject of controversy in the SEO world. While it’s true that links still play a crucial role in retaining visibility, and Google has long stated it doesn’t count social media for much in terms of ranking, these trends are likely to die sooner rather than later. I have bad news for those who are still clinging to their links in hopes of ranking, as the Internet’s audiences have long been making a paradigm shift from a once-dominant organic search to a socially propelled mobile world. As you’re about to see, the question of whether or not social media plays a role in SEO is likely to shift into, “how does SEO play a role in social media?” Today, the average user’s first point of connection to the internet is no longer primarily organic search engines, but rather social media, and this trend affects both desktop and mobile traffic alike. No matter how you paint it, social media is “what’s hot” today; there’s just no more denying it. According to a report from Statista (referenced at the end of the article), 78% of American internet users now have a social profile somewhere, just to note the significance of an ever-shifting web. The same report shows social usage soared from a mere 24% in 2008 to the staggering 78% it sits at today. Even more striking, look at the user numbers for Facebook alone: the platform had an estimated 1.5 billion users at the start of January 2016, and by the end of the first quarter had already gained another 165 million! Now, note that organic traffic has been consistently falling for some time. Many attribute the drop in organic searches to a rise in mobile users. While mobile usage is definitely rising quickly, this doesn’t mean those organic drops are solely the result of an increase in mobile usage. In fact, desktop users too are putting more of their time and attention into social media, and in many ways social gaming as well.
Steam has over 135 million members, all of whom are primarily desktop PC gamers (which I’ll touch on more in a moment). Another thing to consider here is that, while organic search engines typically determined the rank of content on the web, the game changes completely with social media, as users now determine what gains visibility and what does not. Internet users are now the ones who determine the worthiness of what and who they interact with, not search engines. A clear example of this is the communities I manage on Google Plus. People post constantly, and our visitors decide what they engage with and what they ultimately want to ignore. They decide what communities they join, who they become friends with, and what content they share, not Google. Even with platforms like Facebook, which have heavily manipulated the visibility of brands, publishers, and bloggers in hopes of forcing them to pay, the recent announcement that personal posts will get more priority is a clear indicator that attempts to dictate what people see and do have backfired. People don’t like being told what to do on social media, and if Facebook keeps it up, it will only continue to alienate its audiences even more. While virtually all social media platforms have algorithms that help point focus to certain features, content, people, etc., in the end the people themselves are the real judge, not the social platforms. The Shift of SEO From Organic To Social: The tactics used to optimize sites today are only going to become more and more irrelevant over time. While the main point of SEO has always been to optimize for search engines, those who fail to focus their SEO efforts on optimizing for a social audience will fail over the long term. With search engines, publishers were at the mercy of single entities, but with social you’re at the mercy of every living, breathing human being.
The future of SEO will be focused on convincing people, and no longer the search giants who once ruled with an iron fist. Even in the case of organic search, Google itself knows that having one entity judge the worthiness of every idea that traverses the web isn’t sustainable in the long term. The fact is, if you really want to serve people what they want, like, and think they need, then you need to let the people themselves be the judge. Shifting Audiences: Besides the ever-increasing shift from organic to social, one heavily overlooked area of the web is gaming networks. People often think of gaming platforms as places of entertainment and nothing more, but they’re actually social networks in their own right! Gaming networks like Xbox Live, PlayStation Network, and even the Wii U have staggering numbers of users who spend their free time playing games and socializing with friends on their networks of choice. Xbox Live is at about the 50 million mark, PlayStation Network has well over 110 million users, and the Wii U appears to be gaining speed as well. Collectively, all of these platforms, including Steam, have broken the quarter-billion mark and are growing faster than the organic web ever will. That quarter-billion mark also doesn’t account for the hundreds of millions of mobile gamers out there! Still think the traffic is on Google search? Wrong, it’s shifting toward YouTube and Twitch! Internet users spend just a few minutes interacting with organic search engines like Google and Bing, while YouTube users are heavily engaged, and it’s not odd for a single person to spend hours on the platform. The same goes for those gaming networks as well. What does all this mean for publishers? They need to learn to adapt to a socially driven web or fail!
If the bulk of internet users are using Facebook, Twitter, YouTube, Google+, along with mobile apps like WhatsApp and Snapchat, where does this leave publishers who rely on Google search? Yep, dead in the water. And there’s more, as the real battle for user interest on the web will soon be pitted between the standard social networks and gaming networks. The average user spends just 20 minutes a day on Facebook, according to this report from Zephoria Digital Marketing: Top 20 Valuable Facebook Statistics (right-click to open in a new tab). Think that’s a lot? Better think again. Users on gaming platforms spend closer to an hour a day playing games and socializing with other gamers on their platforms of choice. In fact, social gaming is now the world’s #1 form of entertainment, even over Hollywood. Now, take a look at this report from Jason Evangelho on Forbes: Pokemon Go To Surpass Twitter Users, which states the wildly popular mobile title “Pokemon Go” is about to surpass Twitter in terms of daily active users. Yes, just one mobile game has a larger audience than an entire social network that’s been around for years! Many in the SEO and marketing world may have noticed the steep drops in engagement across virtually all standard social platforms in recent times. You can thank gaming platforms for this! When new consoles and popular titles hit the market, the web literally shrivels up and dies for days on end! Only one standalone social platform has managed to integrate social gaming on a mass level, and that would be Facebook. If the other social platforms, like Google Plus, don’t get back into the gaming arena, they could find themselves in a lot of trouble retaining an audience. Google does have one huge advantage, though: it owns YouTube, the world’s most utilized video-sharing platform, which helps solidify its control of traffic on the social web. 
Look at it however you want, the organic web is dying, and it won’t be long before Google Search itself becomes completely integrated with Google’s social platforms. Either way, the volume of organic search queries is falling rapidly, and the focus of internet users remains heavily entrenched in social media. The Push Toward Interactive Media: It doesn’t take a rocket scientist to figure out the web is becoming more interactive with each passing day. This isn’t 2006, when social media was still in its infancy, Myspace was the ruling social media party, and most people spent their time surfing the organic web. Welcome to the world of interactive media, where live streaming is the norm; video, gaming, and mobile apps are the preferred forms of entertainment; and people prefer to discuss their favorite subjects rather than simply read about them. Today, the traditional publisher is only secondary to the social platforms people use to reach them. The Current State of Social Media and SEO: While many keep arguing that links are all that matter, the fact is Google has stated time and time again that the +1 button is, quote, “a recommendation for content in Google search” and across its social platforms. When someone +1’s something, they’re publicly recommending that content to their followers, and for Google to show it in search. The bigger question is: what are publishers who placed their bets on links going to do when the major search engines take a backseat to the integrated search features available on social media today? In fact, the average social user tends to use the integrated search features of their favorite social platforms more than the traditional standalone ones. Think about it: does one really need to leave Google+, Facebook, or Twitter to get the latest news? No, they’re going to use the search capabilities these platforms already provide. Also, today the latest news often hits social media before it ever makes its way into organic search. 
Most stories originate on social media these days, and that’s a fact. And what about Bing? Microsoft’s Bing search engine already heavily takes social media into account when ranking your site. In case no one noticed, Bing actually asks you to include links to your site’s interconnected social profiles to help determine your rank. It won’t be long before Google decides to follow suit as well. Social Drives Organic Rank: While Google Search is still the primary traffic driver for publishers at present, and links still count, social media remains the dominant force driving rank, even if indirectly. For most publishers today, it’s the first point of contact for newly published content. Plus, social platforms push content out in front of more people, giving publishers more chances to pick up new links to their content. Other metrics like reach have always been a major component of Google’s search algorithms, and the effective use of social media can definitely increase one’s reach across the web, effectively driving one’s rank higher. Conclusion: Without question, the audiences of the modern-day web are shifting their wants and needs in line with the ever-changing technologies available to them. The web is quickly transforming into a fully interactive multimedia environment where people are more socially connected than ever before. It’s a digital experience where social media, gaming, mobile apps, and emerging technologies like virtual reality are only going to continue to prevail. So the question isn’t how we harness social media in hopes of ranking, but rather how we apply SEO tactics to increase our visibility on social media, and how we adapt the experiences we offer users on our sites to fall in line with the wants and needs of a digitally interactive environment. The future of the web will see gaming, mobile, social, and VR merge as one. How will web publishers respond? 
Better yet, how will web publishers find ways to integrate the modern technologies that people love into the experiences they offer? All that being said, the enormous growth of social media is set to forever change the future of SEO. Written and published by Daniel Imbellino — Co-Founder of Strategic Social Networking and pctechauthority.com. Many thanks for reading. Be sure to check out the Strategic Social Networking Community on Google+ to connect with tens of thousands of IT professionals and learn effective strategies to grow your social presence online. You’re also welcome to follow Strategic’s brand page on G+ for the latest social media and IT industry news. NOTE: Strategic Social Networking is funded by the public! Consider supporting our work on Patreon. Additional References: Statista: Social Media and User Engagement Report 2016
https://medium.com/strategic-social-news-wire/the-increasing-influence-of-social-media-on-seo-3079950afd04
['Daniel Imbellino']
2018-02-07 09:11:08.952000+00:00
['SEO', 'Google', 'Rankings', 'Social Media', 'Social Networks']
No Document? Here Is How TDDD Saves Your Life
Why No Documents? Documents are important. Everybody knows that. Then why do we have so many missing documents? Photo by Annie Spratt on Unsplash Engineers are lazy by nature. They try to find the optimal route to complete their work. Their primary focus is to deliver the value of the product to the customer. Naturally, minimal documentation is compiled. Documentation First The typical project response is a supportive measure: somebody says “hey, we should have more documents,” and project members start compiling documents. As a result, a good number of areas get covered by the new documents. However, this approach is not perfect. First, we lose the context of the design by the time we write the docs. “Hey, do you know why this design looks like this?” “Ah, that was designed by Smith, who left the project.” This is a typical example of the loss of context. And even if you are the original designer, are you able to remember the design decisions you made one year ago? Second, the post-documentation process cannot cover all the topics you need to document. Once the intensive documentation activity is done, the project continues as usual. Then you will suddenly find that some of the design is missing, and you need to go back to the documentation activity again. We can say that we need to add more documents as we find gaps, but that is still a continuous pain. As time passes from when something was designed, documenting it becomes much harder. So why not design a “pre-documentation” process instead of a “post-documentation” process? Ticket Driven Design Doc Here, the Ticket Driven Design Doc (TDDD) can save your life. I have been working in management across several different projects and teams, and I have found that TDDD promotes a pre-documentation culture in the team. I assume that you are using some ticketing system to handle the development process. You can put “documentation” and “documentation review” steps in the workflow. Here is an example workflow in a typical story-based agile project: Typical TDDD Workflow. 
The workflow requires developers to write and review a document first, before every single piece of development. Every development ticket has documentation as part of the process, which forces engineers to make a document for any change. Wait, putting documentation in the workflow? It sounds obvious. What’s the point? Well, we have another key driver for TDDD: you name each document based on its ticket number. Here is an example: Example of Ticket Numbered Document This might remind you of RFCs. Yes, basically, we leverage the same idea. Ticket-numbered documents have some useful characteristics: Documentation Status and Authenticity Clarified When you use a collaborative documentation tool (like Confluence), you will find some documents whose status is unclear. The document might be a draft, might not be up to date, might not have been reviewed by anybody, or might even be just a private note and not official. Putting the ticket number in the title clarifies the status of the documentation. It also clarifies who reviewed the document and who approved it. Clear Mapping of Development and the Doc Since the ticket number is clear, the mapping to the development work is clear for everybody. This allows the project manager to see the status of the documentation. If feature development is underway with no documentation, the project manager can ask engineers to write a doc and to have it reviewed. Change Part Clarified As the development activity and the doc are mapped, the document clarifies the changes. If you keep updating a master document, the reviewer might be unclear about which part is already developed and which part is the change being designed now. A ticket-numbered document forces engineers to write down the changes. This helps document reviewers in general. It also helps QA, since they generally think in terms of changes. Documents Linked We can link documents with relationships, like RFCs do. If you are incrementing the changes, you can link the documents as “Updates”. 
If you are replacing an old feature with a new one, you can link it as “Obsoletes”. The linkage clarifies what kind of “changes” you are describing in the doc. Readers of the documents can also become aware of any update thanks to the document links. Mitigating the Weakness of TDDD TDDD is not perfect. Since TDDD focuses on changes, it’s not good at keeping master documents up to date. Let’s imagine that we have an API specification. TDDD does NOT guarantee that the API spec gets updated. As TDDD focuses on the change, it’s weak at maintaining the master. We need a strategy to mitigate this weakness. Automated documentation tools, like Doxygen or Swagger, can be one of the solutions. TDDD Promotes a Pre-Documentation Culture As I clarified above, TDDD is not a perfect solution. However, TDDD promotes a pre-documentation culture in the team, which is fundamental to the development process. TDDD also guarantees the documents’ authenticity. I have had several successful experiences with this approach in a few small teams, and I am now promoting a similar approach in several new activities. Hopefully I can provide more updates in a post in the near future.
https://yuichi-murata.medium.com/no-document-here-is-how-tddd-save-your-life-3d00af3a584
['Yuichi Murata']
2020-11-29 14:02:01.880000+00:00
['Engineering', 'Documentation']
CSS Grid Component Patterns
Photo by Patrick Hendry on Unsplash For the longest time, web layout has been one of the hardest problems to solve. This has primarily been due to the fact that the web was originally designed around documents and how a document flows. As the web has changed, the need for proper layout solutions to be added to the CSS spec has been in high demand. The first item to be added was Flexbox. Flexbox was designed to give better control over layout in one direction, either row or column. It allows you to define flexible-width items that grow and shrink according to the parent container size, as well as control how the elements are positioned inline. Though Flexbox does have rules around wrapping elements when they start to overflow the parent container, it was never designed for creating a 2-D grid. Luckily, a new CSS specification was added shortly after Flexbox: CSS Grid. With CSS Grid we can now do a few important things that we had been lacking. First of all, we can now define our layout in rows and columns at the same time. It doesn’t matter if we use a combination of explicit and implicit row and column definitions; the biggest thing is we no longer need a library’s n-column grid system to achieve layouts. The second hugely impactful thing that was added was the concept of a gap. A gap defines the margin in between the child elements of a grid container. Though this doesn’t sound like much, this concept now allows us to achieve the much-sought-after layout where we have uniform margins in between elements, but not around the edges of our layout, like this example I stole from MDN: To achieve something as simple as the example above in a responsive way used to take combinations of extra div containers and negative margins. With CSS Grid, this can now be defined with a single line of CSS. 
The gap property has been so powerful that it is, as of the day of this writing, being introduced to the Flexbox spec and the multi-column spec, and is supported by some browsers already. How Do We Use CSS Grid in Components? We now live in the age of components. Much like how atoms and molecules can be combined to create organisms, we can create simple, single-purpose components that can be composed together to build sophisticated and amazing components. This definitely includes layout. CSS Grid works fantastically in this model and allows us to abstract out awesome layout primitives that, when combined with other primitives, can achieve some sophisticated layouts. Before we go into these patterns, we must address the elephant in the room: Internet Explorer. Does IE support CSS Grid? The short answer is no. The more complicated answer is that it actually supports an earlier draft of the CSS Grid spec, which is no longer compliant with the current spec. So what does that mean for those of us that ‘support’ IE? First of all, I would like to dispel the myth that ‘support’ means the same experience. Our app may support both desktop and mobile screens, as well as several other sizes in between. While each screen size will be very similar, there is no expectation that we will have the exact same experience on each device we use. Instagram.com, for example, only allows uploading photos from the mobile version of the site, and many websites will remove content, such as pictures, once space becomes a premium. In that same vein, supporting IE does not mean that it has to be the same experience as modern browsers. The IE version of Grid is still very powerful. Maybe 80% of what you want to achieve can be supported by the old spec, and the rest is close enough. Maybe you have to write a fallback layout using Flexbox. The point is that you can stay cutting edge while still supporting browsers that are no longer cutting edge. 
For the rest of this article, I am going to go through a couple of CSS Grid component patterns from easy to more complicated (which is still very easy) using React and styled-components, because they make it easy to show these patterns in a modern framework, but also because I like using them. Nothing I am showing you, however, requires either one and can be achieved, through minimal adaptation, in your tech stack. This includes vanilla CSS and HTML. The Stack One of the most basic, yet most used layout needs is putting one or more things on top of each other with consistent gutters in between. From form labels to paragraphs of text, to social media feeds, they all need to stack one thing on top of another with a consistent gutter. This is especially true when working in a design system with predefined layouts. This is easy enough to achieve like this:

const Stack = styled.div`
  display: grid;
  gap: 1rem;
  grid-template-columns: 1fr;
`

<Stack>
  <Header/>
  <Content/>
  <Footer/>
</Stack>

With those three lines of code, we have our Stack component. Let’s break down what each line does. display: grid turns this element into a grid-container, and each one of its immediate children will be a grid-item. This happens automatically; no need to select the child elements and mess with them, they will be grid-items just by being children of a grid-container. The next line defines the gap value, or what I prefer to call the gutter. This can be any valid CSS size value, such as px, rem, em, %, ch, vh, vw, and so on. It simply puts a gutter in between, but not around, each grid-item. The last line is how we define our columns. When we use grid-template-columns we are providing a template of how our columns should be made up: grid-template-columns takes a list of CSS size values as a template of how wide each column is and how many columns there are. 
For example, to achieve a 3-column layout of 100px columns, you would simply put grid-template-columns: 100px 100px 100px; and the browser will automatically place each grid-item into a 100px-wide column (with the gap in between). Nothing says the columns have to be uniform: you can do 100px 500px 10vw and it would still be 3 columns, but each column would have a different width. You might be asking, what happens if we define 3 columns, but we have 4 grid-items? The first three items will be placed in their respective columns, but the fourth item will be placed under the 1st item in the 1st column (once again, with the gap in between them). This is because we have implicit rows that, by default, will be as tall as the tallest grid-item in the row. So when we exceed the number of columns in the template, the browser will automatically move the item into the next implied row, into the next available column. There is also a property called grid-template-rows that behaves exactly like its column counterpart, but for rows. In our example above, we are using a size value unique to CSS Grid called the fr unit, which stands for fraction. The power of fr units will be shown in the next example, but suffice it to say, it will take a width of one fraction of the available space. In our case, since we are only defining 1fr, this will be the entire width of the grid-container. So to summarize, by setting the display property to grid, our element will become a grid-container and, at the same time, each of the children will become grid-items. We are setting a gap of 1rem that will define the space between the grid-items. Finally, we are explicitly declaring a single column width of 1fr, or one fraction, which effectively is all the available width. In a real layout, we would want to vary our spacing by exposing the gap prop, typically mapping a more semantic name to an underlying spacing length. 
The convention I like the most is t-shirt sizing, so we can adjust the above props to be like this:

const sizes = {
  xs: '0.125rem',
  sm: '0.25rem',
  md: '0.5rem',
  lg: '1rem',
  xl: '2rem',
  xxl: '4rem'
}

const Stack = styled.div`
  display: grid;
  gap: ${props => sizes[props.gutter] || sizes.lg};
  grid-template-columns: 1fr;
`

<Stack gutter='xxl'>
  <Header/>
  <Content/>
  <Footer/>
</Stack>

Now we can declare a gutter size; otherwise, it will default to the lg size, which is 1rem. Once again, this is how you could do it with styled-components, but you are not limited to it. The Split Another very common layout need is to put one thing next to another. In fact, I would have to say that, after vertically centering an element, putting something next to something else has got to be “THE” struggle of the web. This is where the Split component comes in. Its whole purpose is to take one thing and put it next to something else while keeping that consistent gutter in between. Let's look at how it is made:

const Split = styled.div`
  display: grid;
  gap: ${props => sizes[props.gutter] || sizes.lg};
  grid-template-columns: 1fr 1fr;
`

<Split gutter='xxl'>
  <Main/>
  <Aside/>
</Split>

The Split is almost identical to the Stack, except for one important difference: we are now declaring two columns of 1fr each. When using fr units, the browser will first figure out how much space is left after calculating all gaps and non-fr lengths, such as px or %, and then it will divide that remaining space. In our example above, by declaring two columns of 1fr, we are effectively doing a 50% split. You might ask, isn’t that the same as 50% 50%? Even though it seems like it is, it’s not. Remember how I said that the browser calculates the remaining space? If we were to use 50%, this doesn’t take into account the 1rem gap, so we would end up creating an overflow of 1rem, which is not what we want. The other powerful thing about fr units is that we can describe our column sizes in terms of ratios. 
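To make the 1fr-versus-50% point concrete, here is a small sketch of the math the browser effectively does for fr tracks. The helper function is hypothetical (not part of any library), and it assumes 1rem = 16px:

```javascript
// Mimics how fr columns are sized: gaps (and any fixed tracks) are
// subtracted first, then the remaining space is split by fr ratio.
function frColumnWidths(containerWidth, gap, frs) {
  const totalGap = gap * (frs.length - 1); // gaps go only BETWEEN tracks
  const free = containerWidth - totalGap;  // space left for the fr tracks
  const totalFr = frs.reduce((sum, fr) => sum + fr, 0);
  return frs.map(fr => (free * fr) / totalFr);
}

// `1fr 1fr` in a 416px container with a 16px (1rem) gap:
console.log(frColumnWidths(416, 16, [1, 1])); // [200, 200]
// 200 + 16 + 200 = 416px exactly: no overflow.

// A naive `50% 50%` would give 208px + 208px, which plus the 16px gap
// is 432px: it overflows by exactly the 1rem gap.

// `2fr 1fr` divides the remaining 400px into thirds for a 2/3 split:
console.log(frColumnWidths(416, 16, [2, 1])); // roughly [266.67, 133.33]
```

This is only an illustration of the sizing rule, of course; the browser handles all of this for you.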
More often than not, we probably don’t want to split evenly in half; we might want a 1/4 split or a 2/3 split. This is where the fr units really shine. If we changed the template to be something like 2fr 1fr, the browser would divide the remaining space into 3 parts, give the first column two of those parts and the second column one part, giving us a 2/3 split. In fact, we can declare a whole list of these options like this:

const fractions = {
  '1/4': '1fr 3fr',
  '1/3': '1fr 2fr',
  '1/2': '1fr 1fr',
  '2/3': '2fr 1fr',
  '3/4': '3fr 1fr',
  'auto-start': 'auto 1fr',
  'auto-end': '1fr auto',
}

const Split = styled.div`
  display: grid;
  gap: ${props => sizes[props.gutter] || sizes.lg};
  grid-template-columns: ${props => fractions[props.fraction] || fractions['1/2']};
`

<Split fraction='2/3'>
  <Main/>
  <Aside/>
</Split>

You will notice that I added auto-start and auto-end to the list. These effectively let us say that one column will take up the size it needs, and the other column will be given the rest of the width. This is great for things like an input with a button next to it:

<Split fraction='auto-end'>
  <input/>
  <button>Search</button>
</Split>

There are so many more component patterns you can do with CSS Grid, and I will share them with you in a future post. In the meantime, if you want to see these two components in action, check out both of them over at my component library, Bedrock Layout Primitives.
https://medium.com/the-non-traditional-developer/css-grid-component-patterns-8b472d26fdbe
['Justin Travis Waith-Mair']
2020-05-20 14:01:01.483000+00:00
['React', 'Programming', 'JavaScript', 'Css Grid', 'CSS']
Java Abstract Class: What Is It Good For?
The Java abstract class eludes many Java developers. Let’s learn what it does for us and how to use it. Abstract art: a product of the untalented sold by the unprincipled to the utterly bewildered. Al Capp I am guessing you have heard of the malady called ADD, or Attention Deficit Disorder. On a recent trip to Paris, my son and I discovered we are suffering from another malady with similar initials: Art Deficit Disorder. We can look at paintings and sculptures and find them uninspiring. Where my daughter enjoyed the d’Orsay, we looked for the food court, where we enjoyed some espresso and fresh-squeezed orange juice. Java Abstract Class Java has abstract classes, which are incomplete. They cannot be instantiated like a regular class; abstract classes must be subclassed to be used. In these classes, we can declare abstract methods. Abstract classes are similar to interfaces in Java. Let’s dive deeper into this comparison. Compare Like interfaces, abstract classes cannot be instantiated. Where an interface will just contain method signatures, an abstract class can contain method bodies. Abstract classes can declare fields that are not static and final, whereas all interface fields automatically become public, static, and final. We can implement as many interfaces as we want, but abstract classes are like regular classes in that we can only extend one. The Java tutorial has some good guidance on when to use abstract classes: when we “want to share code among several closely related classes” or “expect that classes that extend your abstract class have many common methods or fields”. Interfaces should be used when you “expect that unrelated classes would implement your interface” or “want to specify the behavior of a particular data type”. Java Abstract Class Example Like all good coders, let’s get our hands dirty with some code. First, we can look at an example abstract class to get us started. This Battery abstract class has one implemented method and two abstract methods. 
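The original code snippets were embedded as images, so here is a sketch of what the Battery example might look like; the exact field and method names are my assumptions, not the author's code:

```java
// An abstract class: it cannot be instantiated directly and must be
// subclassed to be used.
abstract class Battery {
    // Instance fields: allowed here, unlike in an interface.
    protected String chemistry;
    protected double voltage;

    Battery(String chemistry, double voltage) {
        this.chemistry = chemistry;
        this.voltage = voltage;
    }

    // One implemented method, shared by every subclass...
    public String describe() {
        return chemistry + " battery at " + voltage + "V";
    }

    // ...and two abstract methods that concrete subclasses must implement.
    public abstract double charge(double hours);
    public abstract double remainingCapacity();

    // Perhaps surprisingly, an abstract class can even hold a runnable
    // main method, as discussed later in the article:
    public static void main(String[] args) {
        Battery b = new ComputerBattery();
        System.out.println(b.describe());
    }
}

// A concrete subclass must implement both abstract methods.
class ComputerBattery extends Battery {
    private double capacity = 50.0; // percent charged (hypothetical)

    ComputerBattery() {
        super("Lithium-ion", 12.0);
    }

    @Override
    public double charge(double hours) {
        capacity = Math.min(100.0, capacity + 10.0 * hours);
        return capacity;
    }

    @Override
    public double remainingCapacity() {
        return capacity;
    }
}
```

Running the main method prints the shared describe() output even though it lives in the abstract class itself.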
There are also two fields defined. ComputerBattery is a concrete Java class. Therefore, it needs to implement both of the abstract methods that Battery defined. Abstract and Interface? An abstract class can even implement an interface. This seems like mixing spaghetti and mashed potatoes, but hey, it is legal! Let’s take our Student interface and mix it into an abstract class. In our ProbationaryStudent class, we don’t implement all the methods defined in the Student interface. This is possible since the class is marked as abstract. Main? Would you think that if you had a main method in an abstract class, it would run? I didn’t think it would either, but in fact, it does. I suggest you try it out for yourself. As you can see, abstract classes have their place in Java: similar to interfaces, but used in a different way. Where have you used Java interfaces? Check out more great content and subscribe at MyITCareerCoach.com
https://medium.com/swlh/java-abstract-class-what-is-it-good-for-880cf0117648
['Tom Henricksen']
2019-10-16 07:54:32.279000+00:00
['Java', 'Software Development', 'Learning To Code', 'Programming']
3 to read: Answering political threats | The economic case for investigations | ‘Data diaries’ for learning
By Matt Carroll <@MattatMIT> Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. “How do we respond to threats after our Hillary endorsement? This is how”: With grace, dignity and steely resolve. The Arizona Republic publishes in a Republican stronghold — and it had never endorsed a Democrat for president in its history, until this election. Their endorsement of Hillary Clinton ignited a wave of threatening responses. To those who sent in “vile threats,” the Republic’s Mi-Ai Parrish delivers a measured response that touches on forgiveness, the First Amendment, and a fierce determination to be fair. A brilliant response. An economist makes the case for saving investigative journalism: Investigative reporting is expensive and time consuming. For that reason, many newsrooms have reluctantly dismantled or cut back on investigations to deploy scarce resources in other spots. Yet Stanford Prof. Jay Hamilton argues in his book, “Democracy’s Detectives,” that investigative teams actually make economic sense. They help newsrooms differentiate and strengthen their brand against competitors, and hugely benefit society. For those worried about the state of investigative journalism (i.e.: me), it’s a refreshing read. Book review by Rick Edmonds for Poynter. Creating a ‘data diary’ to understand your news consumption: How many of us give active thought to how we consume data? Where we go and how we use it? Miguel Paz instructed his students to track their data consumption over a week. They learned fascinating lessons about themselves and the media. (Transparency alert: Miguel and I attended a similar class together.) Get notified via email: Send a note to 3toread (at) gmail.com Matt Carroll runs the Future of News initiative at the MIT Media Lab.
https://medium.com/3-to-read/3-to-read-answering-political-threats-the-economic-case-for-investigations-data-diaries-for-84f2d12933e4
['Matt Carroll']
2016-10-19 10:48:40.691000+00:00
['Hillary Clinton', 'Media', 'Journalism', 'Data', 'Trump']
Things You Wanted to Know About Networking
Things You Wanted to Know About Networking …and some corny networking jokes that you didn’t. What the heck is TCP/IP?! TCP/IP is a model for describing how communication across networks typically occurs: from the application layer of a source machine (HTTP), down through to the physical layer, across to the destination machine, and up to an application layer on that machine receiving a packet. TCP/IP uses IP addresses to communicate between a source and destination host across a network. A TCP packet walks into a bar and says, “I’d like a beer.” The bartender replies, “You want a beer?” The TCP packet replies, “Yes, I’d like a beer.” …🥁 So… IP Addresses? In simple terms, an IP address is a number identifying a device on a network. There are two forms of IP address — IPv4 and IPv6. IPv4 is still the most commonly used — this is despite the creation of IPv6 in 1998 as a means to address the inevitable future shortfall of IPv4 addresses. In this article, I won’t focus on it, but you can learn more about IPv6 below: IPv4 addresses are divided into four octets of bits and represented in dot-decimal notation. The four octets of bits have the following possible ranges: [0 - 255].[0 - 255].[0 - 255].[0 - 255] An example IP address can be found below (this one is for OpenDNS): 208.67.220.220 An IP address can be converted to a binary form, and this often happens when working with IP addresses. Here’s the above IP address in binary form: 11010000.01000011.11011100.11011100 The best thing about IPv4 jokes is that you can tell them 254 times before they’re exhausted. …🥁 Are some IP addresses special? Yes. 
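(A quick aside before we look at those special addresses: the dotted-decimal to binary rendering above is easy to reproduce with Python's standard library, which can also tell us whether an address is one of the reserved ones.)

```python
# Render each octet of an IPv4 address as 8 bits, matching the
# conversion shown above.
def ip_to_binary(ip: str) -> str:
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(ip_to_binary("208.67.220.220"))
# 11010000.01000011.11011100.11011100

# The standard ipaddress module knows which addresses are reserved for
# private use, previewing the next section:
import ipaddress
print(ipaddress.ip_address("10.0.0.1").is_private)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)   # False
```

This is just a sketch with no input validation; the ipaddress module is the right tool for anything beyond a quick conversion.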
Across the 2³² possible IPv4 addresses, we have the following reserved ranges:

10.0.0.0/8: 10.0.0.0 – 10.255.255.255
172.16.0.0/12: 172.16.0.0 – 172.31.255.255
192.168.0.0/16: 192.168.0.0 – 192.168.255.255

According to RFC 1918, each of these ranges is reserved for private use only — that means they can’t be allocated for public use over the public internet. (The loopback range, 127.X.X.X, is similarly reserved, though by a separate specification.) Trying to connect to an IP address in these ranges will not work unless you have a network interface on your private network which has routes defined for these ranges. For example, many people’s home router management IP address falls somewhere in the range 192.168.0.0–192.168.255.255. If I connect to 192.168.0.1, I’m not going to connect to your home router management interface; instead, the connection will route through my wlo1 network interface, which you can see is responsible for the part of the range which contains 192.168.0.1:

3: wlo1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.37/24 brd 192.168.0.255 scope global dynamic noprefixroute wlo1
       valid_lft 860858sec preferred_lft 860858sec

Tell me about subnets! Subnets are just subdivisions of a larger network. You can subnet both a private and a public network. Congested Networks Subnetting as a concept can serve many purposes, but one of the original purposes was to ease network congestion. In the past, when a network became too monolithic, the traffic running across the network would run more slowly. The sheer volume of traffic would overload network hubs, and there would be an excess of “packet collision” incidents, where packets sent at an identical time would collide and destroy one another, leading to a delay and packet ‘reset time’ before packets could be resent. 
Subnetting a network into chunks means that traffic stays within its designated subnet unless it is destined for an alternate subnet — only at that point will a given packet cross subnet boundaries via a router. This means broadcast packets are distributed more evenly across the nodes in a network, rather than overwhelming specific communication hotspots and congesting the network as a whole. This motivation for subnetting is still somewhat relevant today; however, thanks to the emergence of physical “switch”-based networks — which route traffic directly from a source machine to a destination, as opposed to “hubs” — networks today generally do not suffer as much from the adverse effects of congestion. So why do we still subnet our networks?

Easing Administration Another of the most common purposes of subnetting is to make the machines on a network easier to identify and administer. Let’s take an example — say we have a corporate network that spans the following private IP range: 10.0.0.0 – 10.255.255.255 That range has 16,777,216 potential IP addresses — an awful lot of addresses to keep track of for a network administrator who, for whatever reason, might need to perform patches, upgrades and other work on specific isolated groups of hosts. To ease this burden, we could subnet this network to make the best use of it — then allocate the subnets for specific, different real-world purposes.
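The address count quoted above (16,777,216 addresses in 10.0.0.0/8) can be double-checked with the same ipaddress module:

```python
import ipaddress

# A /8 leaves 24 host bits, so the range holds 2**24 addresses.
corporate = ipaddress.ip_network("10.0.0.0/8")
print(corporate.num_addresses)  # 16777216
```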
Subnetting by Geography Let’s split our hypothetical 10.0.0.0 – 10.255.255.255 network into 2 subnets by geographical boundaries, and have the following address range for machines in our hypothetical US office branch: 10.0.0.1 – 10.127.255.254 And the following address range for the machines in our UK branch: 10.128.0.1 – 10.255.255.254

Subnet Identifier and Broadcast Address You’ll notice that I’ve omitted the first and last IP addresses of each of the ranges I’ve described — i.e. 10.0.0.0 , 10.127.255.255 , 10.128.0.0 and 10.255.255.255 are all omitted above. The first and last IP addresses of a subnet serve special functions on a network, and the ranges I’ve written are just the “usable” IP addresses of each subnet — i.e. the ones that can be assigned to a host. The first IP address of a subnet range is known as the subnet identifier — an IP address which is effectively never assigned to a host and is instead used as a signpost to a specific subnet. The last IP address of a range serves as the broadcast address — a special IP address where any packets transmitted to it are broadcast to all hosts attached to the given subnet.

The worst thing about a broadcast joke is that you have to tell it to everyone in order to find the one person who gets it. …🥁

…Anyway, if we are trying to find the origin of an IP address on our hypothetical corporate network, such as the one below: 10.0.0.231 Because we have subnetted our hypothetical network by real-world geography, we can identify at a glance that the IP address originates from the US office building. Hooray!
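The US/UK split above corresponds exactly to cutting 10.0.0.0/8 into two /9 subnets, and ipaddress can perform the split and report each subnet's identifier and broadcast address:

```python
import ipaddress

# Split the /8 into two equal /9 halves (US and UK in our example).
us, uk = ipaddress.ip_network("10.0.0.0/8").subnets(prefixlen_diff=1)

print(us.network_address, us.broadcast_address)  # 10.0.0.0 10.127.255.255
print(uk.network_address, uk.broadcast_address)  # 10.128.0.0 10.255.255.255

# 10.0.0.231 falls in the US subnet:
print(ipaddress.ip_address("10.0.0.231") in us)  # True
```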
🇺🇸 The above is a very contrived example, but this is a pretty handy thing to be able to do when your network is very large and responsibility for it is spread out across multiple different offices, countries or continents. We don’t need to stop there — we could further subnet our subnets to break this IP address range down even further. Perhaps we could have a subnet per department, room or floor of these office buildings? There aren’t many hard and fast rules on what makes good subnetting practice, but real-world physical divisions and real-world geography can often make perfect sense as network divisions too.

Classful IP Addressing With 4 groups of 8 bits, an IPv4 address has a size of 32 bits, which means the pool of possible IPv4 addresses contains 4,294,967,296 (2³²) addresses. With this relatively large address space comes the problem of how to use it most efficiently: how do we decide who owns what, and which segments of the address space should be reserved for which specific purposes? As a solution to this need to carve up the growing internet, IPv4 classful addressing was introduced. It broke all IPv4 addresses into distinct classes with specific purposes. Class A addresses were to be used for huge networks, like those deployed by Internet Service Providers (ISPs). The Class A space is divided into 128 networks, each supporting up to 16 million hosts — a host being any device that connects to a network (computers, servers, switches, routers, printers, etc.). 0.0.0.0 - 127.0.0.0* *Any address starting with 127.X.X.X is considered a “loopback” address and is therefore not allocated on public or private networks. A loopback address is an address which routes back to the originating machine. Class B addresses were to be used for medium and large-sized networks in enterprises and organisations. They support up to 65,000 hosts on each of 16,000 individual networks.
128.0.0.0 - 191.255.0.0 Class C addresses were the most common class and were to be used in small business and home networks. These support up to 256 addresses (254 usable hosts) on each of 2 million networks. 192.0.0.0 - 223.255.255.0 Class D and E addresses were not commonly used. Class D was reserved for special cases such as multicast — applications streaming audio and video to many subscribers at once. Class E addresses were reserved for research purposes by those responsible for Internet networking and IP address research, management, and development. 224.0.0.0 - 255.255.255.255

Network Prefix and Host Identifier IP addresses can be broken down into two portions: The network prefix — the part of the IP address used by routers to determine where on the network a packet for a given IP address should be routed. The host identifier — the part of the IP address which identifies the specific destination host once a packet has reached the correct locality of the network. In classful networking, the rules for deriving these divisions of an IP address are specific to a given class.

Classless Inter-Domain Routing (CIDR)! In 1993, the state of the Internet changed with the introduction of CIDR. CIDR was introduced to replace classful networking and to help slow down the depletion of usable IPv4 addresses. IPv4 address space walks into a bar and shouts: “One strong CIDR please, I’m exhausted!” …🥁 Unlike classful IP addressing, the CIDR system uses non-fixed host and network portions of IP addresses. Instead of being fixed, the network prefix and host portion of an IP address range are derived through a subnet mask.
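Under classful addressing, the class of an address is determined entirely by its first octet, which is simple enough to sketch directly (an illustrative helper using the class boundaries listed above):

```python
def ip_class(ip):
    """Return the classful-addressing class (A-E) for a dotted-decimal IPv4 address."""
    first_octet = int(ip.split(".")[0])
    if first_octet <= 127:
        return "A"   # includes the 127.X.X.X loopback range
    if first_octet <= 191:
        return "B"
    if first_octet <= 223:
        return "C"
    if first_octet <= 239:
        return "D"
    return "E"

print(ip_class("10.0.0.231"))   # A
print(ip_class("192.168.0.1"))  # C
```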
Amongst many other things, this change gave organisations the ability to size their corporate networks more flexibly and efficiently — where previously, with classful IP addressing, large and medium-sized organisations were left to choose between networks composed of either 256 hosts or the next possible size up: 65,536 .

Subnet Masks and CIDR Notation So we’ve learnt that subnets are just “chunks” of a larger network, where a network itself is just a range of IP addresses. We’ve also learnt about classful and classless IP addressing architecture. I was going to tell you a joke about the CIDR block, but you’re all too classy for it… …🥁 …So the question you’ve all been wondering about…

How Does a Network Router Know Where to Send a Packet? As previously covered, IP addresses can be divided into two fields: the network prefix and the host ID/portion. In classful networking, whether an IP address is of Class A, B, or C determines which part of the address corresponds to the network prefix. With the network prefix available, the router knows which route to send a packet along to reach the destination host. With the advent of CIDR, the parts of the IP address designated as the network prefix and host ID are no longer fixed, and therefore you cannot determine whether a given IP address is on a local or remote network from the IP address alone (aside from designated addresses such as the loopback address) — this is where subnet masks provide the missing piece of the puzzle.

Example Subnet Mask 255.255.255.0 The above subnet mask can be used in combination with an IP address to identify which portion of the address corresponds to the network prefix and which corresponds to the host ID.
This is through a conversion of both the subnet mask and IP address into binary form and then a subsequent bitwise AND (&) operation on the bits:

11111111.11111111.11111111.00000000 (255.255.255.0)

And the IP address (10.0.0.231) in binary form:

00001010.00000000.00000000.11100111 (10.0.0.231)

And the bitwise AND combination of the two:

11111111.11111111.11111111.00000000 (255.255.255.0) & 00001010.00000000.00000000.11100111 (10.0.0.231)

Allowing us to derive (from the masked and unmasked portions of the IP address):

Network Prefix 00001010.00000000.00000000.00000000 (10.0.0.x)

Host ID 00000000.00000000.00000000.11100111 (x.x.x.231)

You can view the subnet mask as the answer to the question: “…what are the significant bits for identifying the network prefix of this IP address?”

CIDR Notation CIDR notation can be seen as a shorthand for communicating an IP address or range together with a given subnet mask. It works by combining a given IP address with a number in the range 0–32, prefixed by a slash, e.g. 10.0.0.0/24 The above CIDR block communicates quite a lot of information, and it is easy to interpret once you understand the derivation. CIDR blocks can be used to communicate IP address ranges, all possible IP addresses in IPv4 space ( 0.0.0.0/0 ), as well as a single IP address. For instance, a single IP address in CIDR notation is: <some-ip-address>/32 For example, if we take the Google DNS IP address: 8.8.8.8 In CIDR notation, this would be 8.8.8.8/32 . Where /32 corresponds to a subnet mask of: 255.255.255.255 — which, when applied to an IP address, leaves no host portion — therefore identifying a single unique IP address in IPv4 space. Calculating an IP address range from a CIDR block is relatively simple.

Worked Example: For the CIDR block: 10.137.1.0/24 Take the number trailing after the slash (denoted here as n ) and calculate 32-n .
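The same masking operation can be performed in Python by converting both dotted-decimal values to 32-bit integers and applying a bitwise AND (& with the mask for the network prefix; & with the inverted mask for the host ID):

```python
def to_int(dotted):
    """Pack a dotted-decimal IPv4 string into a 32-bit integer."""
    a, b, c, d = (int(octet) for octet in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """Unpack a 32-bit integer back into dotted-decimal form."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

ip, mask = to_int("10.0.0.231"), to_int("255.255.255.0")

print(to_dotted(ip & mask))                 # network prefix: 10.0.0.0
print(to_dotted(ip & ~mask & 0xFFFFFFFF))   # host ID: 0.0.0.231
```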
The resultant value gives you the number of host bits — the bits that will be zeroed out in the subnet mask for the given subnet. 32-24 = 8 Convert 255.255.255.255 into binary; starting from the right-hand side, set the value of each bit to 0 until you’ve done this 32-n times. This gives you the binary form of the subnet mask that you can apply against your IP address block.

11111111.11111111.11111111.11111111 (255.255.255.255)

Successive removal of bits from the above:

11111111.11111111.11111111.00000000

Convert the binary subnet mask back to dot-decimal notation. The value you are left with is the subnet mask, which you can then apply over your IP address.

11111111.11111111.11111111.00000000 (255.255.255.0)

Apply the subnet mask over your IP address using a bitwise AND operation. You should then be able to derive the IP address range.

11111111.11111111.11111111.00000000 (255.255.255.0) & 00001010.10001001.00000001.00000000 (10.137.1.0)

And from this you can derive:

Network Prefix 00001010.10001001.00000001.xxxxxxxx (10.137.1.x)

Host Portion xxxxxxxx.xxxxxxxx.xxxxxxxx.00000000 (x.x.x.[0-255])

Subnet Range: 10.137.1.0 - 10.137.1.255

Thanks For Reading! That’s it for now. I hope it’s been enjoyable, informative and given you a taster of some of the principles of networking. I leave you with: What do they call a group of network engineers? An outage. …🥁
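The worked example can be verified with the ipaddress module, which derives the mask and range straight from the CIDR block:

```python
import ipaddress

subnet = ipaddress.ip_network("10.137.1.0/24")

print(subnet.netmask)            # 255.255.255.0
print(subnet.network_address)    # 10.137.1.0
print(subnet.broadcast_address)  # 10.137.1.255
print(subnet.num_addresses)      # 256
```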
https://medium.com/swlh/things-you-wanted-to-know-about-networking-b9537c00d3d3
['Andy Macdonald']
2020-06-08 14:10:12.148000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Networking', 'DevOps']
Flipping a Coin With a Quantum Computer
Amazon offers three different quantum computing devices to choose from: Gate-based superconducting qubits from Rigetti Computing Gate-based ion traps from IonQ Quantum annealing from D-Wave The third option, quantum annealing from D-Wave, is a different paradigm of quantum computing that won’t work with the above code. So our choice is between superconducting qubits and ion traps. I’m going to choose the superconducting qubits from Rigetti since they have a cheaper per-shot rate than IonQ… and because I used to work at Rigetti. In a new cell, copy the following. Here is where you’ll need the S3 folder from earlier because, unlike with the free simulator, you don’t want to lose any results from real hardware. If you forgot the name of the S3 folder, you can look it up back in the AWS console under the S3 header. We’re reusing the coin_flip_circuit we created in the last step. If all goes well it should respond with: 'CREATED' Now in a new cell (do not rerun the previous cell, since that will enqueue another task!) keep running task.state() to see the status change to 'QUEUED' . What’s happened? Your task hasn’t run on the quantum computer yet because it is waiting in line. Also, some of the quantum computers are only accessible during certain times of the day. It takes a lot of work to maintain properly calibrated quantum computing hardware, so it will only be available sparingly. Check the Devices tab on the Amazon Braket AWS console page to see the schedule for each device. In my case, I was scheduled to wait several hours for the next available slot. If you don’t want to keep your notebook server running for that amount of time, then you can run task.id and make a note of the returned value.
Later, you can run the following in a new cell (or even a new notebook) to recover the task: Once task.state() returns 'COMPLETED' then you can access the results just like in the local simulator: Again, if all goes well, you’ll see a result like this: Counter({'0': 593, '1': 407}) If you have the patience to repeat this a few times, you’ll likely notice that the ‘0’s seem to dominate the results. This is not a mistake, but a reflection that today’s quantum computers aren’t perfectly accurate: qubits tend to relax toward the ‘0’ (ground) state, so hardware errors skew measurements in that direction. In our example circuit, this results in more ‘0’s than ‘1’s.
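The notebook snippets referenced in the article aren't reproduced here, but with the Amazon Braket SDK, recovering a queued task and reading its results might look roughly like the sketch below (the task ARN is a hypothetical placeholder — substitute the value you noted from task.id):

```python
from braket.aws import AwsQuantumTask

# Recover a previously enqueued task by its ARN (placeholder shown,
# not a real ARN).
task = AwsQuantumTask(arn="arn:aws:braket:...:quantum-task/...")

if task.state() == "COMPLETED":
    # measurement_counts is a Counter mapping bitstrings to shot counts,
    # e.g. Counter({'0': 593, '1': 407})
    print(task.result().measurement_counts)
```

This is a sketch under the assumption of default Braket SDK behaviour; it requires AWS credentials and an actual task ARN to run.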
https://stevenheidel.medium.com/flipping-a-coin-with-a-quantum-computer-4c8aec93fa27
['Steven Heidel']
2020-09-02 02:57:24.674000+00:00
['Quantum Computing', 'Programming', 'Software Development', 'Python', 'Technology']
How to Debug a ML Model: A Step-by-Step Case Study in NLP
How to Debug a ML Model: A Step-by-Step Case Study in NLP While there are so many articles out there on how to get started in NLP, or tutorials teaching you specific techniques, one of the hardest lessons to learn is how to debug a model or task implementation. Mental state before model/task implementation, StockSnap via Pixabay (CC0) Mental state after model/task implementation, Free-Photo via Pixabay (CC0) Not to worry! This article will go through the debugging process of a pretty subtle (and not so subtle) series of bugs, and how we fixed them, with a case study to walk you through the lessons. If you would like to just see a bulleted list of tips, scroll down to the end! In order to do that, let me take you back a few months, to when we (my research collaborator Phu and I) were first implementing masked language modeling in jiant, an open-source NLP framework, with the goal of doing multitask training on a RoBERTa model. If this sounds like an alien language to you, I would first suggest you look into this article on transfer learning and multitask learning, and this article about the RoBERTa model.

Setting up the Scene Masked language modeling (MLM) is one of the pretraining objectives in BERT, RoBERTa, and many BERT-style variants. It is an input-noising objective: given a text, the model has to predict 15% of the tokens from their context. To make things harder, these predicted tokens are replaced by “[MASK]” 80% of the time, by another random token 10% of the time, and left as the correct, unreplaced token the remaining 10% of the time. For example, the model will be shown the text below. Example of a text changed for MLM training. Here, the model will learn to predict “tail” for the token currently occupied by “[MASK]”

Designing the Initial Implementation We first looked into whether other people had implemented MLM before, and found the original implementation by Google, and a PyTorch implementation by AllenNLP.
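The 15% / 80-10-10 noising scheme described above can be sketched in plain Python (a simplified illustration with a toy vocabulary; real implementations such as Huggingface's operate on token-ID tensors):

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "dog", "wagged", "its", "tail", "cat", "ran"]

def mask_tokens(tokens, rng, mask_prob=0.15):
    """Pick ~15% of positions as prediction targets; of those, replace
    80% with [MASK], 10% with a random token, and leave 10% unchanged.
    Labels are None everywhere except at target positions."""
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok                    # model must predict the original
            roll = rng.random()
            if roll < 0.8:
                inputs[i] = MASK               # 80%: replace with [MASK]
            elif roll < 0.9:
                inputs[i] = rng.choice(VOCAB)  # 10%: random token
            # remaining 10%: keep the original token
    return inputs, labels

inputs, labels = mask_tokens(["the", "dog", "wagged", "its", "tail"], random.Random(0))
```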
We used mostly all of the Huggingface implementation for the forward function (the file that used to be there has since been moved and no longer exists). Following the RoBERTa paper, we dynamically masked the batch at each time step. Furthermore, Huggingface exposes the pretrained MLM head here, which we utilized as below. Thus, the MLM flow in our code became: Load MLM data -> Preprocess and index data -> Load model -> In each step of model training: 1. Dynamically mask the batch 2. Compute the NLL loss for each masked token The jiant framework primarily uses AllenNLP for vocabulary creation and indexing, as well as instance and dataset management. We first tested with a toy dataset of 100 examples to make sure the loading was correct with AllenNLP. After we went through some pretty explicit bugs, such as a label type mismatch with AllenNLP, we came upon a bigger bug.

The First Signs of Trouble After making sure our preprocessing code worked with AllenNLP, we found a strange bug. TypeError: ~ (operator.invert) is only implemented on byte tensors. Traceback (most recent call last): This was because the code we copy-pasted from Huggingface was written for a newer version of PyTorch; in the PyTorch version we were using, you needed .byte() instead of .bool() . Thus, we simply changed one line, from indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices to bernoulli_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).to( device=inputs.device, dtype=torch.uint8 )

Trouble Strikes Now, finally, we were able to run a forward function without erroring out! After a few minutes of celebration, we got to work verifying more subtle bugs. We first tested the correctness of our implementation by calling model.eval() and running the model through the MLM forward functions. Since the model, in this case RoBERTa-large, had been pretrained with MLM, we would expect it to do very well on MLM.
That was not the case, and we were getting very high losses. It became clear why: the predictions were always 2 off from the gold labels. For example, if the token “tail” was assigned index 25, the label for the “[MASK]” token in “dog wagged its [MASK] when it saw the treat” would be 25, but the prediction would be 27. We only discovered this after hitting this error. `/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.` This error meant that the prediction space was larger than the number of classes. After a lot of pdb tracing, we realized that we weren’t using the AllenNLP tag/label namespace. In AllenNLP, you can keep track of all the vocabularies you need in an AllenNLP Vocabulary object using namespaces, such as the label namespace and input namespace. We found that the AllenNLP vocabulary object automatically inserts @@PADDING@@ and @@UNKNOWN@@ tokens at indices 0 and 1 for all namespaces except for label namespaces (which are all those ending in “_tags” or “_labels”). Since we did not use the label namespace, our indices were being shifted forward by two, and the prediction space (defined by the label vocabulary size) was larger by 2! After finding this out, we renamed our namespace to a label namespace, and this particular threat was curbed.

A Last Hidden Bug, and a Pivot By this point, we thought we had caught all or most of the bugs, and that MLM was working correctly. However, while the model was now getting low perplexity, a week later, while looking through the code with a third person, we found another hidden bug.
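The off-by-two shift described above can be illustrated in a few lines (a hypothetical token list mimicking how padding/unknown entries get prepended to non-label namespaces, not AllenNLP's real internals):

```python
# Non-label namespaces get @@PADDING@@ and @@UNKNOWN@@ prepended at
# indices 0 and 1, so every real token's index moves forward by two.
tokens = ["dog", "wagged", "its", "tail"]

label_namespace = {tok: i for i, tok in enumerate(tokens)}
input_namespace = {tok: i for i, tok in enumerate(["@@PADDING@@", "@@UNKNOWN@@"] + tokens)}

# "tail" is 3 in the label namespace but 5 in the input namespace:
# predictions land two indices away from the gold labels.
assert input_namespace["tail"] - label_namespace["tail"] == 2
```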
if self._unk_id is not None:
    ids = (ids - 2) * valid_mask + self._pad_id * pad_mask + self._unk_id * unk_mask
else:
    ids = (ids - 2) * valid_mask + self._pad_id * pad_mask

Somewhere buried in a separate part of the code, which we had written a few months back, we had shifted the inputs of any Transformer-based model back by 2, since AllenNLP shifted them forward by 2. Thus, the model was essentially seeing gibberish: whatever vocabulary tokens happened to be two indices away from the correct inputs’ indices. How did we fix this? We ended up reversing the previous fix and not using the label namespace for inputs, since everything was shifted back by 2 anyhow, and simply making sure that the dynamically generated mask_idx was shifted forward by 2 before being fed into the model. To fix the earlier mismatch between prediction and label space sizes, we made the number of labels the size of the pretrained model’s tokenizer, since that includes all of the vocabulary the model was pretrained on. After many countless hours of debugging and running preliminary experiments, we were finally out of bugs. Phew! So, as a recap of the things we did to make sure the code was bug-free, and of the types of bugs we saw, here’s a nifty list.

Key Takeaways for Debugging a Model Start testing with a toy dataset. For this case study, preprocessing the entire dataset would’ve taken ~4 hours. Use already-created infrastructure if possible. Beware: if you are using other people’s code, make sure you know exactly how it fits into your own and what incompatibilities, both subtle and not, may arise from integrating it. If you’re working with pretrained models, and if it makes sense, try to load a trained model and make sure that it does well on a task. Beware of differences in PyTorch versions (and versions of other dependencies) between codebases. Be very careful with indexing.
Sometimes sketching out the flow of the indexing is very messy, and can cause a lot of headaches when your model isn’t performing well. Get other people to look at your code too. You may need to get deeper into the weeds and look at the source code of the packages you are using for preprocessing (such as AllenNLP) in order to understand the source of bugs. Create unit tests to keep track of subtle bugs and to ensure you don’t reintroduce them with a code change. And there you have it, a debugging case study and some lessons. We’re all on a journey to get better at debugging together, and I know that I’m far from an expert at model debugging, but hopefully this post was helpful for you! Special thanks to Phu Mon Htut for editing this post. If you’d like to see the final implementation, check it out here!
https://towardsdatascience.com/how-to-debug-an-ml-model-a-step-by-step-case-study-in-nlp-d79d384f7427
['Yada Pruksachatkun']
2020-06-21 01:49:52.022000+00:00
['AI', 'NLP', 'Data Science', 'Technology', 'Machine Learning']
Algorithms Are Making Government Decisions. The Public Needs to Have a Say.
Algorithms Are Making Government Decisions. The Public Needs to Have a Say. Communities need more insight into the government’s use of automated decision systems. Dillon Reisman, Technology Fellow at the AI Now Institute, Meredith Whittaker, Co-founder of the AI Now Institute, & Kate Crawford, Co-founder of the AI Now Institute APRIL 10, 2018 | 10:00 AM AI and automated decision systems are reshaping core social domains, from criminal justice and education to healthcare and beyond. Yet it remains incredibly difficult to assess and measure the nature and impact of these systems, even as research has shown their potential for biased and inaccurate decisions that harm the most vulnerable. These systems often function in oblique, invisible ways that are not subject to the accountability or oversight the public expects. Consider how a lack of such public oversight hit the New Orleans community. In 2012, the New Orleans Police Department contracted with the data analytics company Palantir to build a state-of-the-art predictive policing system, designed to help the police identify people in the New Orleans community who are likely to commit violence or become the victim of violence. The accuracy and usefulness of such predictive policing and “heat mapping” approaches are very much in question. Recent research has demonstrated that predictive policing has great potential to disparately impact communities of color, amplifying existing patterns of discrimination in policing. Other research has raised doubts about whether predictive policing is effective at all. This controversial and potentially biased system was put in place with no oversight. Until a report in The Verge last month, even members of the New Orleans City Council had no idea what their own police department was doing. Other jurisdictions are similarly grappling with the lack of oversight over invisible automated systems.
Former New York City Council Member James Vacca, sponsor of the legislation forming NYC’s new automated decision system task force, cited his own lack of insight into how the city is using automated decision technologies as reason for drafting the bill. This is why we at AI Now released a report on Monday detailing our proposed accountability framework for “Algorithmic Impact Assessments.” AIAs provide a strong foundation on which oversight and accountability practices can be built, by giving policymakers, stakeholders, and the public the means to understand and govern the AI and automated decision systems used by core government agencies. Algorithmic Impact Assessments would first give the public the basic knowledge it needs through disclosure. Before procuring a new automated decision system, agencies would be required to publicly disclose information on the system’s purpose, reach, and potential impact on legally protected classes of people. Beyond such disclosure, agencies would also be required to provide an accounting of a system’s workings and impact, including any biases or discriminatory behavior the system might perpetuate. Given the many contexts and many types of systems, this would be accomplished not through a one-size-fits-all audit protocol, but by engaging with external researchers and stakeholders, and ensuring that they have meaningful access to an automated decision system. These external researchers must include people from a broad array of disciplines and experience. Take, for example, the Allegheny Family Screening Tool, a tool used in Allegheny County, Pennsylvania to help judge the risk that a child might face abuse or neglect. Researchers with different toolsets have yielded insights on how the tool makes predictions, how employees of the Allegheny Department of Human Service use the tool to make decisions, and how it impacts people subject to those decisions. Finally, agencies would need to honor the public’s right to due process. 
This means ensuring that meaningful public engagement is integrated into all stages of the AIA process before, during, and after the assessment through a “notice and comment” process, through which agencies solicit public feedback on their assessments. This would be a chance for the public to raise their concerns and, in some cases, even challenge whether an agency should adopt a particular automated decision system. Additionally, if an agency fails to adequately complete an AIA, or if harms go unaddressed by the agency, the public should have some method of recourse. In developing AIA legislation, lawmakers will need to address several points. For example, how should external researchers be funded for their efforts? And what should agencies do when private vendors that sell automated decision systems resist transparency? Our position is that vendors should be required to waive their trade secrecy claims on information required for exercising oversight. The rise of automated decision systems has already had and will continue to have an impact on the most vulnerable people. That’s why communities across the country need far more insight into government’s use of these systems, and far more control over which systems are used to shape their lives. Dillon Reisman is a Technology Fellow at the AI Now Institute. Meredith Whittaker is a co-founder of the AI Now Institute, a Distinguished Research Scientist at New York University, and the founder of Google’s Open Research group. Kate Crawford is a co-founder of the AI Now Institute, Distinguished Research Professor at NYU, a Principal Researcher at Microsoft Research, and a leading scholar of the social implications of data systems, machine learning, and artificial intelligence. This piece is part of a series exploring the impacts of artificial intelligence on civil liberties. The views expressed here do not necessarily reflect the views or positions of the ACLU.
https://medium.com/aclu/algorithms-are-making-government-decisions-the-public-needs-to-have-a-say-59985bdeb690
['Dillon Reisman']
2018-04-16 17:16:05.680000+00:00
['Algorithms', 'Police', 'Artificial Intelligence', 'Ai And Civil Liberties', 'Technology']
How to Leverage AI to Predict (and Prevent) Customer Churn
The problem is, most managers have traditionally taken a retroactive approach to addressing customer churn. They’ll make tweaks, changes, and adjustments, then look back and conduct a post-mortem as to whether or not those changes were effective. However, with recent advances in Artificial Intelligence (AI) applications, product managers are now capable of better predicting customer churn and taking proactive steps to prevent it.

The Problem with the Retroactive Approach You’ve created what looks to be a well-functioning product or app with a slick user experience, and have had some initial success with user acquisition. But after a while, users start to drop off, customers churn, and you’re not quite sure why. In an effort to reduce churn, designers, developers, and product managers will try a myriad of tactics to solve the problem. They might change colors, tweak fonts, move a paywall, or alter the User Interface (UI), and then wait 2–3 weeks to gauge whether or not turnover has improved. Based on retention benchmarks from the weeks prior, they’ll try to figure out which change or changes made the difference. Was it one change, a few, or the sum of changes put together? The disruption of this loop kills workflow, productivity, and overall efficiency in all departments related to improving UX. This A/B approach of testing one or two changes at a time, measuring success, selecting the best option, and moving to the next is slow, cumbersome, and inefficient. Worse, to even consider doing this right, you need to implement one change at a time, and that can take much more time than your business has runway for. In short, it’s a backward approach that fragments the UX improvement process, and often addresses the root cause of customer churn too little, too late.
How AI Can Solve Churn Proactively Our team of data scientists has created a better, faster, more efficient approach to solving customer churn that leverages new advances in machine learning. The real secret sauce to our approach is the way AI is leveraged in a predictive manner. The more you can forecast churn, the better you can prevent it. With machine learning models, you can understand what’s specifically causing churn. Product managers, developers, designers, and executives are spared the guessing games.

Prediction The first step is the exploratory phase, where you take a deep dive into the data. Utilizing machine learning allows you to sift through large amounts of data (click-stream, purchase history, and more) instead of only small sections. If you had a list of 100 users ranked top to bottom in terms of likelihood to churn, for instance, you could analyze clusters to see what kinds of people seem to be represented in the “highly likely to churn” bucket. By uncovering profile attributes such as age, gender, income, the campaigns customers came from, and the source of the customer, you’ll be better able to predict what kinds of customers are likely to churn (and which won’t).

Diagnosis Machine learning can take you most of the way through analyzing the data, so that an analyst can help the business team understand who’s likely to churn and propose preventative changes in the UI. Through behavioral analytics tools, you can segment users by any attribute (behavior, spending level, age, or cohort) and take appropriate action. The diagnostic step is also vital because you can quantify the risk, correct course, and put measures in place to prevent turnover from happening in the future.
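To make the prediction step concrete, here is a minimal illustrative sketch that scores and ranks users by churn likelihood. The feature names and weights are entirely made up, standing in for coefficients a trained model (e.g. logistic regression) would learn from historical churn data:

```python
import math

# Hypothetical feature weights: inactivity and support tickets raise churn
# risk; frequent sessions lower it. A real model would learn these.
WEIGHTS = {"days_since_last_login": 0.15, "support_tickets": 0.4, "sessions_per_week": -0.6}
BIAS = -1.0

def churn_probability(user):
    """Logistic (sigmoid) churn score in (0, 1) from a user's features."""
    z = BIAS + sum(WEIGHTS[f] * user[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

users = [
    {"id": "a", "days_since_last_login": 30, "support_tickets": 3, "sessions_per_week": 0},
    {"id": "b", "days_since_last_login": 1, "support_tickets": 0, "sessions_per_week": 7},
]

# Rank users from most to least likely to churn, so retention efforts
# can target the top of the list first.
ranked = sorted(users, key=churn_probability, reverse=True)
print([u["id"] for u in ranked])  # ['a', 'b']
```

Here the inactive, ticket-heavy user "a" ranks above the highly engaged user "b", which is the kind of ordering an intervention workflow would act on.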
Action Steps to Leverage AI

Now that you've leveraged AI to build predictive models of which kinds of customers are highly likely to churn, here are some specific actions you can take to prevent churn over the lifetime of your business or product:

Intervention: One of the best ways to prevent churn is to intervene in the customer lifecycle of profiles that are likely to churn. By triggering an alert to both the user and your internal team, you can focus on retaining key accounts or even specific individuals.

Acquisition: Churn isn't predicted only by profile attributes. It's also predicated on acquisition channel (Google AdWords, social media, content marketing, partner referrals, etc.). Based on your predictive analysis, you can target the most lucrative users, those with the best retention and LTV, and, of course, fine-tune your products for these specific customers.

Experience: Color, font, user flow, and other parts of the experience all ultimately affect churn. With AI and behavioral analytics, you now have the tools to know where to focus your efforts when tweaking the user experience.

The bottom line is that customer churn can no longer be solved retroactively. Companies, brands, and product managers need to take a proactive approach if they want to meaningfully reduce churn rates.
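The intervention idea above can be made concrete with a tiny trigger sketch. The account names, scores, and threshold here are all hypothetical, purely for illustration:

```python
# Hypothetical churn probabilities produced by a predictive model.
churn_scores = {"acct_17": 0.92, "acct_42": 0.35, "acct_58": 0.81}

ALERT_THRESHOLD = 0.8  # illustrative cut-off for "highly likely to churn"

def churn_alerts(scores: dict, threshold: float) -> list:
    """Return alert messages for every account above the churn threshold,
    ordered from highest risk to lowest."""
    return [
        f"ALERT: {user} churn risk {p:.0%}, route to retention team"
        for user, p in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        if p >= threshold
    ]

for message in churn_alerts(churn_scores, ALERT_THRESHOLD):
    print(message)
```

In practice the alert would go to a CRM or messaging queue rather than stdout, but the shape is the same: score, threshold, notify.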
By leveraging AI to generate a data-driven, predictive strategy, companies can take the guesswork out of where to focus their efforts for a healthier SaaS business and even gain a competitive advantage.
https://towardsdatascience.com/how-to-leverage-ai-to-predict-and-prevent-customer-churn-f84d653a76fb
['Dan Schoenbaum']
2018-04-02 22:29:12.154000+00:00
['Growth Hacking', 'AI', 'Conversion Optimization', 'SaaS', 'Machine Learning']
Inside the Social Media Cult That Convinces Young People to Give Up Everything
Inside the Social Media Cult That Convinces Young People to Give Up Everything The DayLife Army always seemed like a troll. Then it became a nightmare. It started with a tweet. In the fall of 2013, Matthew had recently turned 18 and was just finishing up his first semester at a college in Chicago. A freshman music business student with a happy-go-lucky laugh and a fascination with the internet, he’d been making electronic tracks since high school and wanted to learn how to market his work. Matthew came to college dreaming of creating his own “multidimensional content brand,” combining the positivity-obsessed social media hijinks of his favorite alt-lit writers with the proudly Web 2.0 memes circulating in the net art scene. He’d had a productive first semester, putting the finishing touches on an upcoming album, setting up a website, and rolling out a few music videos centered around the idea of ushering in a better world. He’d even printed out a set of business cards. Now all he needed was fans. On November 23, after posting the album’s first single, Matthew noticed a comment in his Twitter mentions praising the track from a woman he’d later learn was named KoA Malone. Around the same time, a man named Eben “Wiz-EL” Carlson started replying to his tweets too, offering bits of advice on how he was packaging the project. As he began tweeting back and forth with them, he learned that KoA and Wiz-EL were partners. In photos, he noticed that they only wore white. The sudden flurry of interest was strange — Wiz-EL would sometimes reply to one of Matthew’s posts from several different Twitter accounts — but Matthew was flattered that two strangers seemed to be taking his music seriously. “No one was paying attention to me online,” says Matthew. “And suddenly he starts responding to every single one with seven or eight responses.” Wiz-EL seemed to intuit Matthew’s obsession with branding. 
“Any tweet that I felt was kind of corny or stupid, he would insult me, make fun of me, or criticize me. Anything that I did that I already thought was good, he would praise me, building me up. I thought, ‘They’re paying more attention to me than I am.’” Twitter replies evolved into DM conversations, then phone calls. As it turned out, Wiz-EL and KoA also lived in Chicago. Wiz-EL had the sort of credentials Matthew was looking for in a mentor at the time: Forty-six years old with salt-and-pepper hair, he said he came up in the ’90s Seattle rock scene, where he claimed he rubbed elbows with grunge legends. He seemed to have an encyclopedic knowledge of counterculture, art, and different religious traditions. Wiz-EL was critical of Matthew’s brand. Matthew remembers Wiz-EL, who is white, quipping that “you can’t be a white dude who talks about saving the world,” but he also encouraged Matthew to think bigger. “He was basically spelling out the history of things as he saw it, implying that I was in a unique position to do something that was revolutionary,” Matthew says. He encouraged Matthew to begin releasing music under a new artistic pseudonym: Buum. Matthew looked up to KoA too: Née Kimberly Laura Malone and 10 years Wiz-EL’s junior, she’d run a beloved cocktail bar in Tacoma, Washington, and often spoke about coming from a family with deep ties to the music industry. One of her brothers was Kyp Malone, who plays in the popular indie rock band TV on the Radio; another was influential Los Angeles DJ Total Freedom, a regular guest at the zeitgeist-defining queer and POC-focused party GHE20G0TH1K. Matthew says the couple finally invited him over to their penthouse apartment, located in a luxury building on Lake Shore Drive, during the summer after his freshman year. 
Stepping inside, Matthew remembers being struck by the dissonance of the place: The apartment was impressive, he says, with floor-to-ceiling windows and a view of Lake Michigan — but it was basically unfurnished, as though they’d never really settled in. Though they seemed to Matthew to be strapped for cash, KoA and Wiz-EL were brimming with business ideas, from a plant-based healing company specializing in legal cannabis to a lifestyle brand called Fuxy. They seemed excited about the future: Conversations with Wiz-EL and KoA invariably circled back to the power of aspirational visualization as a means of “manifesting” one’s dreams. Matthew was eager to learn more about the set of spiritual principles they were developing. During the visit, Wiz-EL AirDropped a collection of images to Matthew’s phone from a white iPad: fantasy brand logos, fashion photos, expensive properties, movie stills. Matthew left the apartment with a paper bag full of folded white designer clothes. The meeting was a step into a rabbit hole that would derail Matthew’s life. During Matthew’s sophomore year, Wiz-EL began posting about a spiritual community he’d started called Tumple. Matthew, who requested we only use his first name for this piece and went by the name “Buum” during his time in the organization, would become Tumple’s first recruit. A 2016 Daily Dot article about Tumple, “Inside the magic sex cult recruiting from Facebook meme pages,” left the group open to interpretation: Was it a new religious movement? An art project? The group’s leadership, which openly described the organization as a “cult,” communicated in a language they had dubbed “Unglish,” substituting the letters U and Y for random vowels. It was an attempt, they claimed, to subvert the Western written tradition, which they associated with whiteness and maleness. 
KoA, who is Black, explained to the Daily Dot that the idea was to dismantle the “white methodology” that governed modern capitalist society and replace it with “a new foundation, the Black pleasure foundation.” Marrying a distinctly internet-centric sensibility with KoA’s knowledge of African diasporic spirituality and Wiz-EL’s interest in Gnostic Christianity, the group outlined a lifestyle that prioritized pleasure, anti-racist education, sober living, and a suite of mystical sex practices, which members could learn through a course called “Pearl Divun,” which cost $2,000 per month. For $1,000 per month, adherents could join a program Wiz-EL described as “lazy visualizatiun” and “like Tumblr.” By the time of the Daily Dot article, Tumple had reeled in a dozen members along with at least 50 “orbiters,” or more casual followers. The vast majority of these individuals only interacted with the organization over the internet. There was certainly something utopian about the project — a millennial-focused religious movement rooted in themes of anti-racism and economic justice and tailored to life online. Wiz-EL and KoA painted a picture of a world where users would own the capital generated from their own content, and followers could earn money simply for being themselves on the internet. But in practice, Tumple would resemble a kind of social media pyramid scheme, one that mostly targeted young artists and musicians: Members would create promotional Facebook posts, Tumple-related videos, Facebook group chats, and other content, generating income through PayPal donations while delivering a cut of their earnings back to Wiz-EL and KoA.
https://onezero.medium.com/inside-the-social-media-cult-that-convinces-young-people-to-give-up-everything-f3878fbec632
['Emilie Friedlander', 'Joy Crane']
2020-06-25 14:45:02.018000+00:00
['Cult', 'Culture', 'Facebook', 'Technology', 'Social Media']
Why Great Jones Co-Founder Sierra Tishgart Makes a Habit of Cooking at Home
Why Great Jones Co-Founder Sierra Tishgart Makes a Habit of Cooking at Home “I think something that really liberated me to cook more at home was learning that it doesn’t always have to be this perfect endeavor.” There are many ways to live a healthy life. The Health Diaries is a weekly series about the habits that keep notable people living well. Within a year, Sierra Tishgart went from working as a senior food editor for New York magazine’s Grub Street to co-founding a cookware company called Great Jones. The idea for the company started with a basic problem: Tishgart needed to buy some new cookware that would last and look nice, but the choices were overwhelming and expensive. She and her co-founder, Maddy Moelis, thought they could simplify the process, and they did just that. The pair made Forbes’ 30 Under 30 list for their millennial-focused line of pots and pans that cost less than $145 apiece, and Great Jones now competes directly with leading cookware manufacturers like Le Creuset. This week, Tishgart talks with Elemental about her puppy Hubble, her obsession with green tea, that one time she fell asleep in the middle of her own dinner party, and the joy of cooking at home even when your schedule is busy. I wake up pretty naturally every morning around 8 a.m. I’m a night owl and I do a lot of work late, so 8 a.m. is probably as early as I ever want to get up. I have a dog who’s about a year old, so my morning routine very much revolves around him. His name is Hubble. We’ll go play a little fetch in my backyard. I’m very fortunate to have a garden in New York; as much as my life can revolve around being out there in the morning, it does. My constant every morning is that I have green tea. I’ve always been an avid tea drinker, although I do drink coffee occasionally. I have no routine in relation to what I eat in the mornings. As I’ve gotten busier with the company, some days I’ll just grab a pastry. 
Some days I go to Daily Provisions, which is a little café near our office; I get smoked salmon on an English muffin. Sometimes we cook in the office, or sometimes there’s a frittata I’ve made the night before. It’s very random and it’s usually a later breakfast while I’m in transit. I worked as a food editor for many years, so food is an important part of my life; it’s something that I count on to bring me joy every day. I like to make sure that whenever possible, the thing I’m eating is the most beautiful version of itself. I define that as something made fresh and not processed. I especially love desserts and sweets. I feel fortunate to have enough disposable income to go to a really beautiful bakery where the experience is warm and inviting, and where the food is fresh. I’ve dabbled in supplements, but I’ve never ever really stuck to anything. I tried! Like even now, I have Ritual vitamins on my desk. They seem nice. But I’ve never been able to get them into my routine. When you own your own business like Maddy and I do, you’re always working. I typically have my butt in my seat at the office by 10 a.m., which is a little bit on the later side. But as I said before, I often work really late into the night so I like to give myself the flexibility to have a cup of tea into the morning, take my dog for a long walk, get a little bit of breakfast, and maybe take a call as I walk. I used to be a writer so my day was very nomadic. Now my days involve a lot of meetings and check-ins with people in our team. And a lot of emails, which is not the most sexy and exciting thing to talk about. The nice thing is that we’re wearing so many hats. My mind goes from product design to marketing to event planning, all within the span of an hour. Some days I’m sitting in the office and other days I’m running around and helping spread the word about our company. We want people to take care of themselves, go home at night, and really turn off, so our office clears out around 6 p.m. 
I’ll usually take a break then and go get my dog from home. We’ll go for a walk and I’ll think about dinner. Then around 8 or 9 p.m., I sit back down at my computer for another couple hours of work. That’s usually my quiet time, especially if I’ve been in meetings all day. That’s when I do my best creative thinking. Exercise is something I would like to do more of. As our lives became more chaotic with Great Jones, exercise was the first thing to go. At the moment, my main form of exercise is walking and it’s also my favorite. I walk to and from work and that’s a little over a mile each way, so I’m at least putting in two or three miles per day. If I have plans or meetings, I also walk to those. Usually I look down at my phone and I’ve put in about five to 10 miles at the end of every day. New York is very competitive when it comes to exercise, but that’s not really my thing. I enjoy walking and think it’s a really lovely way to see the city and be efficient with my time. I’ll also intersperse the occasional hot yoga class in there. Recently, our team has been cooking together in the office, and that’s a habit I want to keep. We have an office with a kitchen and it’s just wonderful to think about cooking as part of our culture. Similarly, that’s a thing I’ve been trying to do more of at home, even when I’m super tired. There are a lot of barriers to cooking, one of which is having the right pots and pans, which is why we started Great Jones. But I also try to make sure I set myself up for success on a Sunday. I try to make a big thing of rice and I keep vegetables and tofu in the fridge so I can make something later in the week. The other day I canceled my plans because I was tired, and I was thinking about ordering something for dinner. But then I realized I had done a great grocery run. I had pasta, shrimp, and fava beans, so I threw something together. It was fun and I always feel really good after I cook like that. 
I try to make sure there’s some kind of home-cooked meal as part of every day; it feels really important to my well-being. When it comes to work-life balance, my work is my life right now. I see that as a privileged position; we take really seriously that people’s employment depends on us. That’s just not something I’m able to turn off. I’ve also been thinking a lot about phone-life balance. I think I had a normal degree of attachment to my phone before starting Great Jones. But now, any time I pull out my phone, there are work emails or other things I need to look at. It’s very hard not to feel tied to my phone at all times. I’m trying to see how to detach from that, but I don’t have it figured out yet. Phone time is not creative time. Before bed, I try to put down my phone and read. Ideally that’s fiction, or something not work-related. But I also just fall asleep right away these days because I’m so exhausted. I’ll fall asleep with the lights on! I had a dinner party a little while ago and I fell asleep in the middle of it on my couch. It’s a joke with all of my friends now. I tie health less to the physical, like how I look or what I ate that week, and more to how I’m feeling about myself. When I feel calm and confident in myself, I feel healthy. I’m excited about a few new things at Great Jones. First, we recently launched something called Potline, which is a texting service where a real person from our team answers real-time questions to help you figure out what to make for dinner. It’s recipe inspiration and cooking guidance for when you’re walking home, you’re tired, and it’s a Monday night. You think: Should I just get a slice of pizza? But hopefully this removes a few barriers and we can help you figure out what to do with that chicken, rice, and sausage you have in the freezer. We also have a lot of launches coming up in 2020. We recently launched our Dutchess, which is our Dutch oven in black and white. 
For a brand very much known for color, this was a departure. But people kept requesting it! I love this work because I feel like I have the wonderful privilege of encouraging home cooking and hopefully making it feel more accessible. I think something that really liberated me to cook more at home was learning that it doesn’t always have to be this perfect endeavor. It doesn’t always have to be a measured-out recipe with set ingredients. When you strip away some of that pressure, I feel way better. Plus, I think cooking is so important for health, whether that’s the way you physically feel or the way you’re connecting to yourself or your community. It’s really fun to see people inspired by what we do here at Great Jones. If you’re sitting on a big idea, my advice is to talk to anyone and everyone about it. The instinct is to keep it a secret. I definitely felt this way. But I think a lot of our success at Great Jones came because we looked to our network, and our friends of friends of friends, and even total strangers. They would sit down with us and we’d ask for their time and energy. Also, if I thought back on the last year, I would have said, “Oh my God, we can never do all of this.” But you just take it one step at a time. If you ask for help, you’ll figure out how to get through it.
https://elemental.medium.com/why-great-jones-co-founder-sierra-tishgart-makes-a-habit-of-cooking-at-home-5db4b0d08732
['Jenni Gritters']
2019-07-02 16:27:24.543000+00:00
['The Health Diarie', 'Great Jones', 'Health', 'Life', 'Cooking']
Five risks of news personalization
1. Reinforcement of filter bubbles

The idea that algorithms might reinforce the echo-chamber effect, particularly in the distribution of news on social media, is now the subject of widespread public discussion. Bias bubbles, filter bubbles, information blindness, preference bubbles: these are all different ways to describe the same phenomenon. People get what they want, but they are less exposed to other opinions and less and less confronted with facts they're not interested in. By using the same click-driven recommendation engines on their platforms, media outlets could reproduce this toxic atmosphere, just on a smaller scale than social media or news aggregators. This risk shouldn't go unheeded. But unlike Facebook, Twitter, or aggregators like Google News and Nuzzel, media organizations have a deep understanding of the content they produce. Most news organizations aim to deliver a balanced representation of the truth. For media with a strong fact-based journalism culture, there is only a residual risk that they could produce and spread misinformation. Nevertheless, constant questioning of the fairness and accuracy of their own reporting, and a clear, transparent, and humble attitude toward critique, are even more important in this new world. That is, for general news. The impact of personalization becomes trickier for opinion pieces or analysis. In these fields, there is a reasonable risk that users would only get points of view that conform to what they already think. Worse, they might be driven toward even more extreme opinions. I see two serious objections to this concern. First, filter bubbles have always existed. For example, my 81-year-old aunt, who has strong liberal values, is not on social media. But she has read The New York Times every day for more than 40 years and watches MSNBC. She wouldn't read The Wall Street Journal, The Economist, or The Washington Post, and if she knew how to do it, she would certainly delete Fox News from her TV channel list.
She lives in a comfy liberal filter bubble. Second, media organizations are now able to correctly label opinion and analysis pieces, and some have agreed on a new machine-readable standard, led by the Trust Project. Rather than reinforcing their bubbles, understanding readers' individual preferences could be an opportunity to create new ways to explore opinions in the news, for example by letting the reader flip from a liberal point of view to a conservative one.

2. Closing people into a tiny box

Narrowly tailoring the feed to people's individual preferences would imprison them in a tiny space, with no exposure to what's outside. The joy of a surprise, an unexpected discovery, has always been part of the media experience. In a linear consumption mode, when people went from page 1 to 64 of a newspaper or magazine, or from minute 1 to 26 of a news show, the audience was inevitably exposed to many unexpected pieces of content. In the digital world, especially in a personalized setting, it is important to keep flexible boundaries in the curation process, in order to let what's outside of each user's personal preferences nourish the whole experience. "Don't over-personalize," said Bethany Ostecchini, director of Time Inc. UK's beauty website, This is Powder, in an interview with Reuters Community. "It is very tempting to stretch the system to its limits. For example, I would only see content with a 30-something woman with very dark hair, green eyes and medium skin. But actually, I would like to see women of all ethnicities. Sometimes, I want to see products that may be outside of my budget or under the budget that I decide. It's about aspiration. Getting that right is really important." This spirit of maintaining a certain porosity should be applied to news apps. Their interfaces should also keep space for spontaneous, linear discoveries. The simplest version of that would be a timeline compiling the most recent news pieces.

3.
When what interests the public overruns the public interest

Can algorithms detect what is in the public interest? Or are they only good at finding what interests the public? Up until now, most recommendation engines have been totally incapable of sorting news by importance. But the crowd is not always the wisest editor. To correct for this, some recommendation engines lean on an item's original position in the feed: if something was manually placed at the top of the homepage, it is flagged as "important". But that's it. Human editors, by contrast, are trained to quickly weigh the relevance of a constant flow of news. This is not an exact science, and what is not very relevant for one outlet could be very important for another. These nuances, which depend on context, are precisely what make the task so hard to automate. Controversial content often has a higher share rate or a bigger number of comments. That's why clicks, shares, and reading time shouldn't be the only variables algorithms use to rate news pieces. To prevent what interests the public from overrunning the public interest, the next generation of recommendation engines should integrate human-crafted metadata about the importance of a piece, its longevity ("ever-greenness"), its geographic relevance, and so on. This metadata could be generated on the content production side. News organizations could add it at reasonably low cost through their existing production channels. They could then use this metadata to their competitive advantage, improving the user experience on their own platforms. Third-party news aggregators would not get this metadata and could never reach the same level of relevance. Metadata about the quality of the content could also be crowdsourced from the audience with specific questions. At the end of each article, the Swiss news organization Tamedia, which I work for, asks a simple question: Was this article worth reading?
Other prompts, like "Do you feel well informed?", "Is this important to know?", "Do you feel happier?", or "Is this inspiring?", would probably also be good questions in certain contexts. The quality of the conversation that an article drives could eventually be measured too.

4. Disappearance of common reference points

With the exponential multiplication of TV channels and the explosion of linear TV in the era of streamed video content, television has become a much less common experience across society. Print news was always more stratified, but it seemed more stable in the digital world. We could all still read the same front-page article and talk about it. Now, what happens if, because of personalization, I see a totally different set of content than my neighbor or my friends? How could we have common ground for a conversation? If people have similar demographics and reading behavior, they'll probably get mostly similar content. But what if they are quite different? Worse, what if the content itself is also personalized? How could somebody even share this content? In a Stanford class, with an interdisciplinary group of students, we imagined a solution to tackle that challenge. Each different version of an article would have a unique identifier, and there would be transparency about the fact that the article is just one of many versions. In the scenario of a legal issue about an article, this raises many unanswered questions. What is the liability of a publisher if one of many versions contains a defamatory paragraph? How would a judge assess the damage? Or imagine a situation where the system showed an uncensored version of an investigative piece to everybody except the lawyers of the accused person, who would get a legally bullet-proof version. This would of course be totally unfair. If an error were made, who should see the erratum or the right of reply? Only those who clicked on the original article? Or everyone?
To prepare themselves for all the situations that will inevitably occur, media organizations should build a system where transparency and traceability are key. Wikipedia has proven that it is possible to track every single version of a text.

5. Loss of privacy in media consumption

On the other hand, the user's privacy must be guaranteed. The European Union's General Data Protection Regulation (GDPR) requires clear consent and justification for any personal data collected from users. The way news organizations explain, and give their customers access to, the data they store about them should be exemplary. A user dashboard should give the user full control over what is stored. In some situations, the simple act of reading a particular article can be a threat. If someone reads the same article about an unsolved murder many times, and the police learn this from the data, would he become a suspect? Should media organizations cooperate with the authorities? And what about political activism? In order to keep users' trust, as little personal data as possible should be stored. These issues could be addressed by applying differential privacy, or by making it possible to delete or randomize particular entries without keeping any logs.
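The differential-privacy idea mentioned above can be illustrated in a few lines. Before publishing an aggregate statistic, such as how many readers opened a given article, a publisher can add calibrated Laplace noise so that no individual reader's presence can be confidently inferred. This is a minimal sketch with made-up numbers; the function name and epsilon value are illustrative, not a production mechanism:

```python
import math
import random

def noisy_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Return the count plus Laplace(1/epsilon) noise.

    A counting query has sensitivity 1, so a noise scale of 1/epsilon
    gives epsilon-differential privacy for the published number.
    """
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(0)
# Publish a noisy reader count instead of the exact one.
published = noisy_count(1000, epsilon=0.5, rng=rng)
```

Smaller epsilon means more noise and stronger privacy; over many published statistics the noise averages out, so aggregate trends stay useful while any individual read stays deniable.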
https://medium.com/jsk-class-of-2018/five-risks-of-news-personalizations-5bdc97fdbdcc
['Titus Plattner']
2018-06-13 00:04:15.650000+00:00
['Algorithms', 'Innovation', 'Media Ethics', 'Personalization', 'Journalism']
Visualizing Crime against women in India on a Map using Geopandas
Visualizing Crime against Women in India on a Map using Geopandas

This article shows a simple way to plot state-wise or district-wise statistical data (here, crimes against women in India) on a map of India (or any country you choose) using GeoPandas, a Python library.

Image courtesy: Reyna Zamora, [Pinterest post]. Retrieved Feb 18, 2020, from https://in.pinterest.com/pin/626563366894685267/

Introduction:

Hello everyone, this article demonstrates how to plot data on crimes against women on a choropleth map of India, state by state. According to Google Dictionary: "A choropleth map is a map which uses differences in shading, colouring, or the placing of symbols within predefined areas to indicate the average values of a particular quantity in those areas."

Geopandas:

Geopandas is a library that can be used to create choropleth maps in very few lines of code. It reads files into dataframes usually called GeoDataFrames. GeoDataFrames behave a lot like pandas dataframes, so the two usually play along nicely. More about Geopandas: https://geopandas.org/

Shape Files:

A shapefile is a simple, nontopological format for storing the geometric location and attribute information of geographic features. Geographic features in a shapefile can be represented by points, lines, or polygons (areas). The shapefiles used in this article to plot the map of India with state boundaries can be downloaded from this link, while the one with district boundaries can be downloaded from this link.

Installation using Pip:

You can install the Geopandas library using pip as shown below:

pip install geopandas
pip install descartes

The descartes library is a dependency of geopandas that has to be installed explicitly, so don't forget to install it too.
Implementation:

We explain the entire implementation in Python in five simple steps. Suppose we have an Excel file named state_wise_crimes.xlsx containing all the Indian state names along with the total crimes against women, as shown below:

+-------------------+----------------------------+
| state_name        | Total Crimes against Women |
+-------------------+----------------------------+
| Andhra Pradesh    | 15931.0                    |
| Arunachal Pradesh | 384.0                      |
| Assam             | 23258.0                    |
| Bihar             | 13891.0                    |
| Chhattisgarh      | 5720.0                     |
| .....             | .....                      |
+-------------------+----------------------------+

We follow these steps to create the choropleth map (the snippets assume the usual imports: import pandas as pd, import geopandas as gpd, import matplotlib.pyplot as plt).

Step 1: Read the Excel file into a pandas dataframe:

df = pd.read_excel('state_wise_crimes.xlsx')

Step 2: Read the Indian map shapefile with state boundaries into a GeoDataFrame:

fp = "Igismap/Indian_States.shp"
map_df = gpd.read_file(fp)

Step 3: Join both dataframes on the state names:

merged = map_df.set_index('st_nm').join(df.set_index('state_name'))
merged.head()

Step 4: Create the figure and axes for Matplotlib and set the title:

fig, ax = plt.subplots(1, figsize=(10, 6))
ax.axis('off')
ax.set_title('State Wise Crime against women in India in 2015',
             fontdict={'fontsize': '25', 'fontweight': '3'})

Step 5: Finally, plot the choropleth map:

merged.plot(column='Total Crimes against Women', cmap='YlOrRd', linewidth=0.8,
            ax=ax, edgecolor='0.8', legend=True)

Output: The output should look like this. You can also save your output as an image using the code snippet shown below:

fig.savefig("State_wise.png", dpi=100)

Similarly, we can plot the district-wise choropleth map using its shapefile, mentioned in the Shape Files section. You can download the Excel file containing the district-wise crimes against women from this link. The output for that looks like:

We can do the same for any single state too, just like I did for Maharashtra.
For this purpose we can use the same shapefile that was used for the district-wise visualization, adding a single line to filter it:

map_df = map_df[map_df['NAME_1']=='Maharashtra']

The output looks like:

Github link: You can get the entire code here:

Summary: In this article we saw how to use the GeoPandas library to plot choropleth maps of India describing the state-wise and district-wise crimes against women in 2015. With appropriate data sets and shapefiles, we can do the same for any country or any state within it.

References:
https://towardsdatascience.com/visualizing-map-of-crime-against-women-in-india-using-geopandas-2d31af1a369b
['Yaser Sakkaf']
2020-03-31 10:01:22.257000+00:00
['Geopandas', 'Data Science', 'Data Visualization', 'Python']
Interview with Castro Antwi-Danso of Esoko
Castro Antwi-Danso, Esoko “What I’d like to get out of a Network of Data Stewards is to see the best practices from other places that could inform what we do, especially in Africa where data use and management isn’t very enhanced.” In this interview recorded at the Data Stewards Network Camp in Cape Town, South Africa, Castro Antwi-Danso, the director of sales and marketing at Esoko, shares insights on the opportunities, risks, and best practices of stewarding private-sector data in the public interest. Drawing from his experience helping Esoko to leverage its agricultural data to create positive impacts in rural communities across Africa, Castro reflects on the potential for data collaboration to help minimize duplicative data collection, and the important role of data stewards in establishing and maintaining trust with data subjects and users.
https://medium.com/data-stewards-network/interview-with-castro-antwi-danso-of-esoko-d7b57e6828de
['Andrew Young']
2019-01-02 19:07:41.052000+00:00
['Interviews', 'Data Stewards', 'Big Data', 'Data Stewardship', 'Data']
The weighty burden of NCDs in Nigeria: Time to Act
Cancer is one of the four major non-communicable diseases. Photo source: Project Pink Blue Editor’s Note: This week’s Thought Leadership Piece comes from Dr. Mamsallah Faal-Omisore, a primary-health physician with qualifications in global health policy, infectious diseases and maternal and child health. This week the third United Nations High-level Meeting on Non-Communicable Diseases (NCDs) is being held in New York, to help set the stage for global health conversations about tackling this group of killer diseases. Faal-Omisore argues the case for an Agency in Nigeria dedicated to Non-Communicable Diseases, given the burden in Nigeria. A child was born; life in the womb was tough, not enough nutrients came through to encourage good growth, and Naya emerged into the world underweight and possibly underdeveloped. Feeble cries were a hallmark of her childhood, and food was neither sufficient nor of the right quality. Naya just made it to school age, suffering several bouts of malaria along the way. Sadly, Naya lost her mum at 13 to a stroke, and she had to move with her brothers to the city to live with their uncle. The next few years were spent at school, helping in the family shop and caring for her siblings and cousins. On reaching 21, she left home to live with Isaiah — a 25-year-old mechanic, popularly called a ‘vulcaniser,’ who eventually became her common-law husband and father of her two children. Both her pregnancies were difficult; she had high blood pressure and had to be induced. Fortunately, she had good care at the Federal Medical Centre and both children were born healthy. By the time she was 27, Naya was the mother of a 5- and a 4-year-old, had started her own business and was doing quite well. She was happy to provide well for herself and her children, who went to good schools and lived in a self-contained apartment. Naya made up for her early years of food deprivation, eating plentifully and well.
She ballooned to 107 kilograms, quite a lot of weight for her 160cm height. Over the years, she struggled to get about easily and increasingly relied on her assistants to run the business. At 41 years old, whilst making her way to the shop, she collapsed and died from a stroke. Image credit: Nigeria Health Watch The impact of Non-Communicable Diseases (NCDs) on Nigerians Naya’s untimely death is unfortunately a daily occurrence in Nigeria. Diagnoses of the four major non-communicable diseases (cardiovascular diseases such as stroke and heart disease, diabetes mellitus, cancer, and chronic lung diseases) are on the rise. WHO estimates that deaths from noncommunicable diseases (NCDs) are likely to increase globally by 17% over the next 10 years, and that the African region will experience a 27% increase, that is, 28 million additional deaths from these conditions, which are projected to exceed deaths due to communicable, maternal, perinatal and nutritional diseases combined by 2030. The costs to society will therefore be staggering: the burden on the health system will be increasingly unsustainable, the loss from the workforce of adults in their prime will have profound effects on the fiscus, and families and communities will struggle to make ends meet. All this is, of course, likely to be made worse by a rising population projected to be the third highest in the world by 2050. Reversal of this trend is both possible and essential. Eighty percent of the risk of developing an NCD is related to lifestyle choices; WHO has identified four areas that, if addressed, will reduce the number of NCD sufferers: physical inactivity, unhealthy diets, harmful use of alcohol, and tobacco consumption. In addition, WHO has also been able to quantify, through return-on-investment analyses, the best value-for-money approaches that governments can take to reduce NCDs in the population.
So clearly there are remedies that can be taken, but what, when and how these should happen in a coherent way remains elusive, and unnecessarily so. In fact, the wide-ranging determinants and outcomes involved suggest that adequately tackling NCDs is imperative if we are to face this challenge and win. Currently, we have an NCD division within the Federal Ministry of Health which has developed a national strategic plan of action on prevention and control of NCDs — a good start, but more is needed, desperately more, to avoid a catastrophic situation. What could more robust anti-NCD action look like? As mentioned earlier, acquiring an NCD relates to lifestyle choices, which in turn are influenced by social norms, the policy environment and poverty challenges. For example, our sociocultural constructs promote weight gain and unhealthy diets in individuals as a sign of affluence, as was the case with Naya. There is very little public health education on the right dietary choices to make or on the importance of physical activity to counter some of these beliefs. Commercial interests rather than health concerns dictate trade and food/agricultural policy, and food safety standards are inconsistently enforced. And, with growing poverty rates, affordability of and accessibility to the right quality of food will become unachievable. Image credit: Nigeria Health Watch This clearly illustrates that NCD action extends beyond the health sector because of the diversity of influential factors and consequences, and as such innovative and sustainable solutions are necessary. In fact, the aim should be an operation so influential that it leads to a paradigm shift in our national NCD discourse, making real and tangible differences to Nigerian lives daily. Such an entity could be a parastatal mandated by government, as is the case with the NCD commissions in Caribbean countries, or part of the remit of an existing public health body, as in India or the UK.
Regardless of its structure, fulfilling the assignment of changing the trajectory on NCDs will be pivotal to its success. It should be empowered and enabled to perform independent and oversight functions built on the following pillars: Policy: Fulfill a governance role by informing as well as assessing the impact of government policy formulation on NCD risk; making an investment case for prioritising NCDs across all sectors; pushing for regulations, for instance on food and beverages, that reduce NCD risk; and ensuring that national strategies emphasise intersectoral collaboration in alignment with international discourse on the best methods to tackle NCDs. Health system design: Mandated to understand the population effect of NCDs through data collection on disease and risk factor burden; to set local and national targets that endorse standards for improvement in NCD care delivery; and to stimulate robust health system design with integrated people-centred care at its heart. Advocacy: Calling for adequate and accessible health financing to prevent and avoid the catastrophic costs associated with an NCD diagnosis will be instrumental. Backing behaviour change campaigns built on a strong health literacy, prevention and promotion ethos will be the game-changer and could lead to sustainable changes in lifestyle across the lifetime of every individual. Knowledge: The generation and public dissemination of organisational outputs through exemplary monitoring and evaluation activities is central to embedding social trust. This will encourage all stakeholders to have a sense of ownership in performance and emphasise its worth as an entity with a powerful social capital philosophy. Image credit: Nigeria Health Watch Tomorrow is built today Would more visible national NCD activity have made a difference to Naya? Probably.
There were missed opportunities for change throughout her life; a healthier mother, a healthier lifestyle, regular health checks, a healthier environment for physical activity and so on. Ultimately, Naya’s chance for a longer, healthier life boiled down to being better informed and being empowered to make the right choices. As a growing nation, we can no longer leave our most prized assets — our human capital — to the whims and vagaries of luck and chance for survival. To tackle NCDs, a ‘whole-of-society’ plus a ‘whole-of-government’ approach is fundamental. Negotiating these changes starts with recognising that we need to urgently and effectively coordinate a multi-pronged and targeted approach encompassing socio-economic, cultural and political influencers, thereby assuring a sustainable reversal in outcomes. It must also set us on the path to join global efforts to meet a substantial proportion of the SDGs, especially SDG 3.4: reducing by a third the number of premature deaths from NCDs by 2030… There are only 12 more years to go! Guest Columnist Bio: Mamsallah Faal-Omisore is a primary-health physician with qualifications in global health policy, infectious diseases and maternal and child health. She has a diverse portfolio of activity centred around the promotion of quality health care in resource-limited settings. Mamsallah is also a clinical team member of Primary Care International — an NGO that provides strategic and capacity-building support to governments and institutions in LMICs to strengthen health systems, particularly in NCD care. In addition, she is a member of the working group on citizens, parents and children of the NGO Health Information for All, as well as a member of the special interest group on NCDs for the global family doctor organisation — WONCA.
Her activities in the policy development space have included developing NCD strategies for pharmaceutical companies, advising organisations on employee health risk assessments for NCDs and supporting initiatives such as the development of a health literacy foundation targeted at children as change agents for NCD prevention action in communities.
https://nigeriahealthwatch.medium.com/the-weighty-burden-of-ncds-in-nigeria-time-to-act-9a47fc4a3a27
['Nigeria Health Watch']
2018-09-27 07:49:15.924000+00:00
['Nigeria', 'Cancer', 'Health', 'Ncds', 'Diseases']
The 6 Levels of Frontend Development Automation
Level 0 (No Automation) — Engineer does everything This is the process that occurs when engineers receive static mockups, without any additional data. No red-lines, no generated CSS properties, nor anything other than a mockup image that represents the end result. Engineers look at the mockup image and re-create everything with code. Since static images can only convey some information, much of it is guess-work. Even font sizes need to be guessed, as there is no accompanying data that provides them. Level 1 (Engineer Assistance) — Automated system can sometimes assist the engineer by providing the styling of the front-end code. The engineer is provided with an interactive webpage that includes red-lines and styling code snippets (such as CSS, SASS or LESS). There are plenty of tools that provide this level of automation. Amongst them are Zeplin, Avocode, InVision Inspect, Sketch Measure, and others. The generated CSS is readable both by humans and devices. It can be copy-pasted or used as a reference. This level of automation saves typing errors and a bit of time. It is still partial since it can only automate Styling (part one out of four). Level 2 (Partial Automation) — Automated system can generate a responsive layout of the front-end code. The system produces code for laying out the interface. This means Styling and Layout (DOM). The code can be HTML, React, Swift, Java (for Android), React Native, Flutter, or any other front-end language. For the first time, the code can run on a device (such as a browser) and display a pixel-perfect interface identical to the original mockup. Level 3 (Conditional Automation) — Automated system can generate interactive parts of the front-end code. The system produces code for most parts (all except Naming). This means Styling, Layout (DOM), and Interactivity. The interface is interactive and animated. It’s no longer static but can have micro-interactions, animations, states, and transitions.
Level 4 (High Automation) — Automated system can generate all parts of the front-end code. All four parts of the front-end are included: Styling, Layout, Interactivity and Semantic Naming. The code should be full, clear and maintainable by a human engineer. Complete components can be used as-is or as a reference for the engineer. The code is readable both by humans and devices. Semantic Naming means that elements are named based on what they are, rather than based on accompanying data. In the following wireframe, try to guess the name of the pointed element. When you’re finished, scroll down. Can you come up with a good name for this element? In the wireframe above, a human engineer would most likely deduce that the element is a “Profile Picture”. Notice that it doesn’t say “Profile Picture” anywhere, but since, as humans, we have seen many profile pages, we are trained to perform pattern matching to accomplish this. Machines can learn this as well. Level 5 (Full Automation) — Automated system can generate all of the front-end code at a human level. The automated system can generate code for all design specs, and the generated code is indistinguishable from human-written code. At this level, if a code review is performed, it should pass a “Turing Test” in which the reviewer can’t tell whether the code was produced by an automated system or a human engineer.
https://medium.com/hackernoon/the-6-levels-of-front-end-development-automation-f6f93a24b7cd
['Or Arbel']
2019-05-24 10:57:40.647000+00:00
['Frontend Development', 'AI', 'Frontend Automation', 'CSS', 'Machine Learning']
Read the anti-agile manifesto
I have read lots of books and seen countless speeches in which the agile manifesto is lifted to the heights of sacred texts, where the agile manifesto is considered the foundation and cornerstone of a new religion. The manifesto shows us that we were not meant to live in suffering and develop software like animals, and that achieving a brighter and better way of software development is possible if you work hard, study hard and believe hard enough. I also believe in the principles of the manifesto, but I always thought that it does not help much to solve my actual problems; it’s mostly (or at least should be) common sense. It sets the goals right, but it does not actually help you to understand where you are now and how you can get there. Once, when I was a bit bored, I picked up the agile manifesto and reversed it for fun. When I looked at what I had done to this poor text, I realized that the anti-agile manifesto perfectly describes how most companies and customers in the IT industry still work. Let’s have a look: We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: Processes and tools over individuals and interactions Comprehensive documentation over working software Contract negotiation over customer collaboration Following a plan over responding to change That is, while there is value in the items on the right, we value the items on the left more. Do you recognize your own company or customer? I guess you do. It perfectly describes large companies, where you have to follow the dedicated company processes, use the dedicated tools, and where you have to carve everything into stone in order to protect your butt. It describes a company where numbers and deadlines are defined by a contract, and where planning is based on what’s in the contract and not on what creates value and what makes sense.
It may seem that I’m angry at large companies and disappointed about how they work. I am. But instead of just complaining, my goal is to figure out whether there is a way to change this without acquisitions, outsourcing, insourcing, or creating subsidiaries. Can you turn an anti-agile organization into an agile organization without total destruction? There are several “experiments” out there, for example the SAFe framework, which will be the subject of many of my future articles. If you enjoyed reading this article, please support me with a few claps. Thank you!
https://medium.com/agileinsider/the-anti-agile-manifesto-ac1bb3b1cb8
['A Product Owner With A Pen']
2020-12-29 15:04:38.513000+00:00
['Product Owner', 'Agile Manifesto', 'Software Development', 'Agile', 'Productivity']
8 Tips to Solve Vaginal Dryness and Overcome Painful Intercourse
8 Tips to Solve Vaginal Dryness and Overcome Painful Intercourse Poor lubrication is a common condition you don't have to live with Photo by Dan Gold on Unsplash Our bodies change as we grow older, but we do not have to age without a fight. Vaginal dryness is a common condition affecting 50–70% of women after menopause. A lack of lubrication is the most common cause of dyspareunia (dis-puh-ROO-nee-uh) or painful intercourse. A lack of vaginal moisture may have a significant impact on a woman’s sex life. Sex does not have to hurt. Fortunately, there are many available options to provide the vagina with moisture and make sex pleasurable. Here are eight tips to help solve the problem of vaginal dryness. Photo by Marc Zimmer on Unsplash Water-based sexual lubricants Sexual lubricants reduce friction. Lubricants enhance the experience and eliminate painful chafing and burning. The application of sexual lubricant as a part of foreplay is a simple trick for many women to improve sexual pleasure. Lube may be applied directly to the vagina, clitoris, or penis before penetration. Not all lubricants are the same. Most lubricants available at a local pharmacy are water-based. Water-based lubricants are the safest choice to prevent sexually transmitted infections. They do not break down latex in condoms or sex toys. Water-based lubricants are ideal for those with sensitive skin or those prone to vaginal irritation. Water-based lubricants do not leave stains on sheets and are easy to clean in the laundry. Popular brands include Astroglide and KY. On the downside, water-based lubricants dry out quickly. Rapid drying may require reapplication for longer sexual sessions. For those suffering from chronic vaginal dryness or inadequate natural lubrication during sex, they may be insufficient. Water-based lubricants are often suboptimal for postmenopausal women. 2. 
Silicone-based sexual lubricants Silicone-based lubricants stay slippery longer, avoiding the frustration of rapid drying and the messiness of reapplication. This type of lubricant does not evaporate when exposed to air and provides a wet sensation for longer sexual sessions. Since they are not water-based, silicone lubes may be used in the shower or bathtub. One popular brand is Platinum Wet. This paraben- and glycerin-free product reduces the risk of chemical irritation. Penchant Premium is another hypoallergenic option ideal for those with sensitive skin. Silicone-based lubricants may be more effective and pleasurable for monogamous couples at low risk for sexually transmitted infections. Non-water-based lubricants have downsides too. They are messy. Soap and water must be used to remove them, and they will also stain sheets. Silicone-based lubricants will break down latex condoms or latex in sex toys. Photo by Alejandra Quiroz on Unsplash 3. Foreplay We cannot overstate the value of foreplay for those with chronic vaginal dryness. Foreplay gets the juices flowing by increasing sexual arousal. Blood flow to the genitals increases, causing the vagina, labia, and clitoris to swell. Better stimulation leads to more natural lubrication and an increase in vaginal elasticity. The vagina then secretes natural lubrication, which increases pleasure and reduces pain. For those with chronic vaginal dryness, communicating sexual needs and desires to your partner is critical. Sexual partners may be unaware of the physiological changes and not know what to do. Providing feedback allows the partner to learn how best to meet your needs. 4. Vaginal Moisturizers Multiple over-the-counter products are available to help increase vaginal moisture. The basic idea is to prevent dryness and improve the vaginal pH balance. These products do not treat the underlying cause of vaginal dryness. They are useful for day-to-day use to alleviate discomfort and improve vaginal moisture.
Those with sensitive skin may benefit from avoiding products containing parabens, glycerin, or propylene glycol. These additives may cause skin irritation. Popular products include Replens and Luvena. Photo by Christiann Koepke on Unsplash 5. Vaginal Estrogen Estrogen is one of the most important female hormones. A decrease in the production of estrogen by the ovaries triggers physiological changes in the vagina. The vaginal tissues thin out, and lubrication decreases. Breastfeeding, certain medications, and menopause all cause estrogen deficiency. Prescription estrogen medication can be placed directly into the vagina to offset the deficiency. Low-dose estrogen applied directly to the vagina bulks up vaginal cells and increases blood flow. Restoring vaginal health allows cells to produce more moisture. Natural lubrication improves, and the more elastic vaginal walls reduce resistance to trauma during penetration. Vaginal estrogen comes in a variety of forms: creams, rings, and vaginal tablets. All require a prescription and evaluation by a health provider before use. 6. DHEA supplements The DHEA hormone is a highly effective alternative to estrogen therapy. Before menopause, DHEA levels are high. As the ovaries stop functioning, DHEA levels fall. DHEA supplements can be placed into the vagina to restore vaginal health, reduce intercourse pain, and improve vaginal dryness. Only one medication is commercially available. Prasterone, sold under the brand name Intrarosa, is a plant-derived form of DHEA. Prasterone is inserted into the vagina once a day through an easy-to-use applicator. DHEA converts into estrogen, targeting the underlying cause of dryness. It is FDA-indicated for painful intercourse. Prasterone requires a prescription and monitoring by a health provider. 7. Selective Estrogen Receptor Modulators Another alternative to estrogen therapy is a class of drugs called SERMs (Selective Estrogen Receptor Modulators).
These medications act directly on the hormone receptors. Direct targeting of the vaginal tissue increases the thickness of the superficial and parabasal vaginal cells. Like estrogen and DHEA therapy, the restored health of the vaginal cells improves the natural ability to produce moisture and lubrication. Ospemifene is the only FDA-approved oral drug for vaginal dryness and painful intercourse. Ospemifene requires a prescription and monitoring by a health provider. 8. Topical Sildenafil Sildenafil is a popular medication used to treat erectile dysfunction in men by increasing blood flow to the penis. Some women benefit from the topical application of sildenafil. A pea-sized amount of compounded sildenafil cream is applied directly to the clitoris before intercourse. Sildenafil cream increases blood flow to the clitoris. For some women, the increased blood flow triggers more natural lubrication, improves pleasure, and helps to achieve orgasm. Sildenafil requires a prescription and monitoring by a health provider for this off-label use. Vaginal dryness is a problem you don't have to ignore Vaginal dryness is a common symptom among women. Painful intercourse and vaginal discomfort are not problems to be tolerated. These options can help restore vaginal health and enhance the sexual experience.
https://medium.com/beingwell/8-tips-to-solve-vaginal-dryness-and-overcome-painful-intercourse-a91ce9d48af1
['Dr Jeff Livingston']
2020-05-31 16:20:27.217000+00:00
['Menopause', 'Sexual Health', 'Sex', 'Women', 'Health']
The Powerful Unifying Effect of Service
Mending social divisions through service America is still the land of opportunity. This is made clear by the fact that so many around the world want to be here. Yet the same opportunities are not necessarily available to all Americans. Prejudice continues to contribute to disparities in areas like employment, wages, social mobility, and policing. The manner in which we approach this problem is critical. If we are tempted by proposed quick-fixes for these disparities — more diversity quotas, racial wealth redistribution, defunding police, etc.— we will drive the country further apart. If we instead look for ways to remedy the underlying cause of the disparities — prejudice — we can bring the country together like never before. Unfortunately, the human brain is wired for prejudice. Negative preconceptions thrive in environments where interaction between different groups is limited. The solution is to create positive interactions between people of different identity groups to override the negative programming that may exist. The key here is positive interactions. Certain conditions must be present to produce the desired outcomes of acceptance and tolerance. Psychologist Gordon Allport, in his seminal 1954 work The Nature of Prejudice, identified four conditions necessary for inter-group activity to result in a reduction in prejudice. These are equal status, common goals, cooperation, and institutional support. In psychology, this is now referred to as the contact hypothesis. In my experience, these conditions provide the bedrock for effective teambuilding in nearly any environment. The four conditions of equal status, common goals, cooperation, and institutional support illustrate why service is capable of mending our social divisions. Equal Status Equal status means that all members of a team engage as equals in its endeavors. Similarities between members should be emphasized and differences minimized. 
It is important to negate any perception of a hierarchy among teammates. For example, upon arrival to boot camp all military recruits get the same haircut, put on identical uniforms, and begin at the lowest rank. Civilian service participants typically wear team shirts and interact as equals, regardless of who they are or where they came from. This equality provides the foundation for cohesion within these groups. Cooperation towards a Common Goal A shared goal is critical for positive outcomes. The goal must also demand cooperation between members. This dynamic is prevalent in sports, where teammates form close bonds while striving to win a championship. These conditions are also ever present within the military, especially during times of war. Few goals are more unifying than protecting and defending the interests of your country, and nothing has higher stakes for cooperation. Civilian service has an endless list of possible shared goals. These might include rebuilding a community after a disaster, working on critical infrastructure projects, or developing local solutions to fight poverty, to name a few. And of course, cooperation plays a huge role in all these efforts. Institutional Support Institutional support refers to support for positive inter-group interactions by those in charge. This support might be expressed through an institution’s policies, laws, or customs. Organizational leaders should enforce measures that ensure equal treatment and condemn comparisons between identity groups. Leadership within the military and civilian service organizations has continually demonstrated a commitment to ensuring equal treatment and equal opportunities for all their members. The four conditions of equal status, common goals, cooperation, and institutional support illustrate why service is capable of mending our social divisions. Most diversity efforts in America today fall short because they fail at fulfilling one or more of these necessary conditions.
Why current diversity efforts fall short If we are willing to take a hard look at the state of prejudice in America today, a few concerns emerge. First, using diversity as a selection criterion is an example of treating the symptoms — lack of diversity and inequality — rather than the underlying cause — prejudice. While there are benefits to having a diverse workforce, hiring or promoting for the sake of diversity and at the expense of qualifications can deepen prejudice. This is because it gives the appearance that there are differences in capabilities between identity groups, failing the condition of equal status. In almost all cases, qualified diverse candidates are out there. Yet all too often employers do not engage in the necessary outreach efforts to find them. Second, there is a lack of cooperative environments for inter-group interactions. While Allport observed that inter-group cooperation leads to a reduction in prejudice, he also noted that competition exacerbates it. Competition — not cooperation — defines the majority of interactions between identity groups today. Online sparring on platforms like Twitter probably comprises the worst of it. But competitive work environments where personal goals override organizational goals can also contribute to prejudice. A two-year service commitment dodges this competition pitfall, as it is a temporary experience where performance among peers is not tied to metrics of societal success and recognition. Finally, the band-aid approach known as diversity and inclusion training has proven to be woefully ineffective. One study found that asking white Americans to think about the concept of white privilege led to more racial resentment in later surveys. Telling people they should not be racist will never have the impact of showing them their racist beliefs are unfounded through positive interactions with others.
https://mattvisnovsky.medium.com/the-powerful-unifying-effect-of-service-cca58e117b94
['Matt Visnovsky']
2020-12-17 08:33:58.603000+00:00
['National Service', 'America', 'Politics', 'Society', 'Race']
How to Buy Stocks During the COVID-19 Pandemic | The Motley Fool
By The Motley Fool Staff · Aug 18 Second-quarter 2020 earnings report cards have been rolling in, giving investors a wealth of information on how businesses have been dealing with the worst of the economic lockdown. There has been no shortage of surprises, with many companies doing remarkably well — some even getting a boost related to efforts to halt the spread of the COVID-19 pandemic. Such is the case with economic downturns. Winning businesses keep winning, new winners are created, and the decline of those already struggling gets hastened. To help sort through the best opportunities from this downturn, I’ve been categorizing stocks into four basic groups: Stocks getting a bump from the pandemic Those growing in spite of the pandemic Stocks temporarily affected but still on solid footing Companies that need serious help (like an abrupt end of the pandemic) to survive Groups one and four are attracting loads of investor attention, but exercising some caution here is a must. Image source: Getty Images 1. The great unknown isn’t all bad A rare and impactful event like the one the world currently finds itself in creates extreme uncertainty, throwing many organizations that weren’t prepared for the unknown into treacherous waters. But for some, uncertainty spells new possibilities. A handful of stocks have roared higher this year (and propped up stock indices overall, creating a perception that markets are totally disconnected from reality). And for some of these businesses, the jump higher is for good reason. E-commerce is one industry that was already growing fast and that has experienced booming demand amid the pandemic. Software services are another, as the digital world continuously increases in importance. But danger could be lurking. The outsized benefit that some of these companies are getting due to COVID-19 won’t last forever, but their stock prices seem to assume that they will.
When the short-term shot in the arm wears off, an adjustment to share prices may be in order. One example that has already played out is Alteryx (NYSE:AYX). Though the company has remained in growth mode (who's going to argue with a 17% revenue jump in Q2 2020?), shares had nearly doubled in value from the start of the year on expectations of booming demand for data science software. While Alteryx's outlook is far from shabby, it didn't warrant the run-up headed into the second-quarter earnings season — thus the roughly 40% "readjustment" to the stock price over the last two weeks. The lesson? It pays to think about whether short-term positive effects for a business will be enduring or not. And if they aren't enduring — or never transpire as expected — a surging share price can be a dangerous thing to chase.

2. Doubling down on longer-term secular growth trends

Rather than piling into hot stocks that may or may not be able to sustain their current momentum, a great place to look for stocks to invest in right now is businesses growing in spite of the current world situation. Familiar names benefiting from long-term trends dwell in this realm. An example is digital ad and social media giant Facebook (NASDAQ:FB). Facebook's results were mixed during the pandemic: revenue growth slowed to just 11% in Q2 (down from 18% in the first quarter) on lower ad spending, a slowdown partially offset by higher social media user engagement. Despite an imperfect quarter, the business continues to chug higher, and though shares have advanced 27% this year, that's not a totally unreasonable amount given Facebook's financial strength ($58.2 billion in cash and short-term investments at the end of June) and highly profitable business model. It may not be a particularly exciting company anymore, but that doesn't mean investing gains aren't to be had.

3.
Betting on a near-sure-fire (eventual) rebound

There are numerous other businesses that haven't fared so well but are nonetheless on solid footing and stand to recover quickly as the effects of the pandemic wear off. Google parent Alphabet (NASDAQ:GOOGL) (NASDAQ:GOOG) is one. Though search-based ads took a hit during the spring of 2020 and led to an overall 2% decline in revenue, its cloud computing segment stayed strong with a 43% increase. And as the economy begins to recover, Google's ad business almost certainly will too. In decidedly less fit shape than the internet search leader is Comcast (NASDAQ:CMCSA). On the surface, this company doesn't look so hot as cable TV and phone subscribers flee. But high-speed internet is the main driver of results here (and a modern staple), and the broadcasting business is holding its own as well. Though revenue fell 12% during the second quarter, Comcast is still easily in profitable territory, having generated $9.29 billion in free cash flow on revenue of $50.3 billion through the first half of 2020. In spite of their flaws and the pullback in growth at the moment, businesses like Google and Comcast will be just fine once the dust settles.

4. Stocks with a less-than-certain future

And now for the last group of stocks: those that really need COVID-19 to be beaten sooner rather than later. The travel, hospitality, and restaurant industries have been hit especially hard, but some investors have piled into these stocks as if they were nearly risk-free. Granted, some of them have a lot to gain if they can survive. Within the cruise industry, Carnival (NYSE:CCL) comes to mind. As travel starts to open up again, shares trading over 70% lower than where they started 2020 could be in for a big bounce higher. But there are ample risks. Substantial debt has been taken on to make up for the cash crunch with ships stuck in port.
And once the industry can set sail again, it remains to be seen how quickly vacationers will return. Simply put, the future viability of the business model (and just how profitable it will be) is far from certain. Granted, not all stocks that dwell in this "uncertain" realm are ultra-high-risk investments. Though sales took a 42% hit during the spring, I have a hard time imagining Disney (NYSE:DIS) being in serious trouble five years from now. Sure, its parks are currently under heavy restrictions, but the burgeoning streaming TV unit is picking up subscribers by the tens of millions each quarter. The dominant movie studio is also testing the waters of new movie distribution. With the addition of Fox, Disney took 40% of the U.S. box office in 2019, laying claim to seven of the top 10 grossing films. That gives it some serious power to try out the controversial (but potentially industry-disruptive) move of bypassing theaters and sending Mulan straight to Disney+ with a $29.99 price tag. And though it most certainly needs an end to COVID-19 if it wants its parks on a firm foundation again, Disney isn't at risk of going kaput anytime soon, with $23.1 billion in cash and equivalents on the books. Nevertheless, thinking about what a company needs in order to recover (reliance on debt to make ends meet is a big red flag) during this current period can save an investor from later pain. So can thinking about the business environment post-coronavirus. Don't simply assume a surge in business growth in the last quarter is part of the "new normal." It likely isn't. Some prudence on both ends of the spectrum will go a long way toward building a solid portfolio for the decade ahead.
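A quick way to keep figures like these honest is to run the arithmetic yourself. The short sketch below checks two numbers cited above: the net effect of Alteryx's run-up and roughly 40% readjustment (indexing the starting share price at 100 is an assumption purely for illustration), and the free-cash-flow margin implied by Comcast's first-half results.

```python
# Quick arithmetic checks on two figures cited in the article.

# 1. Alteryx: shares "nearly doubled," then gave back roughly 40%.
start = 100.0                              # indexed start-of-year price (illustrative)
after_swing = start * 2.0 * (1 - 0.40)     # double, then a 40% readjustment
net_change = (after_swing - start) / start
print(f"Alteryx net change on the year: {net_change:+.0%}")  # still up 20%

# 2. Comcast: $9.29B free cash flow on $50.3B revenue in H1 2020.
fcf_margin = 9.29 / 50.3
print(f"Comcast FCF margin: {fcf_margin:.1%}")  # roughly 18.5% of revenue kept as cash
```

A stock that doubles and then sheds 40% is still up 20% on the year, which is why a painful "readjustment" can coexist with a positive year-to-date chart.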
https://medium.com/the-motley-fool/how-to-buy-stocks-during-the-covid-19-pandemic-the-motley-fool-8030ef9e7422
[]
2020-08-18 17:58:46.943000+00:00
['Stock Market', 'Disney', 'Covid 19', 'Facebook', 'Stocks']
Deploy a load balancer and multiple webservers on AWS instances through Ansible
Let's see how to provision AWS instances and deploy web servers and an HAProxy server dynamically with the help of Ansible. Welcome back to another article. Here you will find the solution to several problems: how to provision an AWS instance and retrieve its IP dynamically, what an HAProxy server is, and, at the end of the article, how to deploy web servers on AWS instances through Ansible. These web servers are accessible to clients through the load balancer, and you will also see how to scale the web servers up and down dynamically. So let's get started.

Load Balancer

A load balancer acts as the "traffic cop" sitting in front of your servers, routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts to send requests to it. In this manner, a load balancer performs the following functions:

- Distributes client requests or network load efficiently across multiple servers
- Ensures high availability and reliability by sending requests only to servers that are online
- Provides the flexibility to add or subtract servers as demand dictates

The architecture of a load balancer

HAProxy

HAProxy, which stands for High Availability Proxy, is a popular open-source TCP/HTTP load balancer and proxying solution which can run on Linux, Solaris, and FreeBSD. Its most common use is to improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database). It is used in many high-profile environments, including GitHub, Imgur, Instagram, and Twitter.
Ansible

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems and can configure both Unix-like systems and Microsoft Windows. It includes its own declarative language to describe system configuration. Ansible was written by Michael DeHaan and acquired by Red Hat in 2015. Ansible is agentless, temporarily connecting remotely via SSH or Windows Remote Management (allowing remote PowerShell execution) to do its tasks. Hopefully you now have some idea of the concepts mentioned above, so let's see the prerequisites and how you can design the architecture below using Ansible.

Architecture of HAProxy

You are going to design this architecture using the automation tool Ansible.

Prerequisites

To configure the above architecture you need a few things, mentioned here:

- A configured Ansible controller node
- Four or more AWS instances
- An IAM account

Create an IAM Account

Create an IAM account with programmatic access to get the access and secret keys so that we can access AWS services. Click through the wizard with the proper access, and your IAM account will be ready after a minute. If you are not comfortable with this process, simply visit the article mentioned below.

Configuration of the controller node

After creating the IAM account, you have to install Ansible on your system, known as the controller node. If your controller node isn't configured yet, you can visit the article mentioned below, where you will find lots of information about the installation of Ansible.

Create a Vault

Ansible Vault is a feature of Ansible that allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plaintext in playbooks or roles. Alternatively, you may specify the location of a password file, or configure Ansible to always prompt for the password, in your ansible.cfg file.
So simply run the command below, put inside the file the access and secret keys you downloaded from your IAM account, and save it with a suitable password.

ansible-vault create vault_file_name.yml

Install the boto library

Here we use our localhost IP address as a managed node, and we will use an SDK to launch the EC2 instances on AWS. Since Ansible is built on Python, we will be using boto. Boto is an API client, so it has the capability to contact AWS. Install boto on your controller node:

sudo pip3 install boto3

Write a playbook for launching the instances

After installing the boto library, you can provision the EC2 instances through an Ansible playbook. Create the playbook and write the code below. After creating the playbook, run it with your vault using the command below, and give the password you set during vault creation.

ansible-playbook --ask-vault-password playbook_name.yml

After running the playbook, check your EC2 dashboard to see whether the instances are provisioned, and note down somewhere the private IPs that the playbook prints.

Copy a key pair to a remote system with the scp command

To copy a key from a local to a remote system, run the following command:

scp key_name.pem remote_username@10.10.0.2:/remote/directory

Here key_name.pem is the name of the private key we want to copy, remote_username is the user on the remote server, and 10.10.0.2 is the server IP address. /remote/directory is the path to the directory you want to copy the key to. If you don't specify a remote directory, the file will be copied to the remote user's home directory.

Give user login permission for a managed node through ansible.cfg on the controller node

If you want to log in to your EC2 instances dynamically, write the code below in your ansible.cfg file, because root login is disabled by default, so you can't log in with the root account on AWS.
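Stepping back to the provisioning step for a moment: the launch playbook itself did not survive into this copy of the article, but a minimal sketch gives the idea. Everything here is illustrative rather than the author's original code: the classic `ec2` module, the vault variable names `access_key`/`secret_key`, and the region, AMI ID, and key pair name are all assumptions.

```yaml
# ec2.yml (hypothetical sketch; module choice, variable names,
# region, AMI ID, and key name are illustrative assumptions)
- hosts: localhost
  connection: local
  vars_files:
    - vault_file_name.yml        # holds access_key / secret_key
  tasks:
    - name: Provision EC2 instances
      ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: ap-south-1
        image: ami-xxxxxxxx      # placeholder AMI ID
        instance_type: t2.micro
        key_name: my_key         # placeholder key pair name
        count: 4
        wait: yes
        state: present
      register: ec2_out

    - name: Show the private IPs of the new instances
      debug:
        msg: "{{ ec2_out.instances | map(attribute='private_ip') | list }}"
```

The `register` plus `debug` pair is what lets you note down the private IPs mentioned above without opening the EC2 console.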
So go to the location mentioned below on the controller node and write the configuration there:

sudo vi /etc/ansible/ansible.cfg

Make an inventory and write the info of the instances

Now that the config file is set up on the controller node, you can do anything you want on your instances (managed nodes). Write into the inventory the private IPs and usernames you got when you ran the provisioning playbook. Check whether your managed nodes are available using the command below:

ansible all --list-hosts

Check whether your EC2 instances (managed nodes) are pingable:

ansible all -m ping

After checking all these things, create the roles to deploy the web server and load balancer on the managed nodes (EC2 instances).

Create roles for deploying the web server and load balancer

Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users. See the Ansible documentation on role directory structure and on storing and finding roles. Use the command below to create a role:

ansible-galaxy init role_name

After creating roles for the load balancer and web server, install the HAProxy software on your controller node using the command below:

yum install haproxy -y

After installing HAProxy, copy /etc/haproxy/haproxy.cfg into the lb_role/templates/ directory inside your load balancer role (you can use the cp command for this). Then open the haproxy.cfg file inside lb_role/templates/ and bind port 8080. Also, write the Jinja code mentioned below so the haproxy.cfg template updates the load balancer dynamically. After completing the template, open your lb_role/tasks/ directory and write the code below inside main.yml to configure the load balancer.
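The Jinja code for the template typically loops over an inventory group so that newly added web servers are picked up automatically. A sketch of the relevant fragment of haproxy.cfg follows; the inventory group name `webserver`, the backend port 80, and the server labels are assumptions, not the article's exact template.

```jinja
# lb_role/templates/haproxy.cfg (fragment; hypothetical sketch)
frontend main
    bind *:8080
    default_backend app

backend app
    balance roundrobin
{% for host in groups['webserver'] %}
    server web{{ loop.index }} {{ host }}:80 check
{% endfor %}
```

Because the loop reads the inventory at template time, re-running the playbook after adding a host to the `webserver` group regenerates haproxy.cfg with the new backend included.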
So your load balancer role is done. Now open web_role/files/ and make a webpage; I have used a PHP webpage named index.php here. Then open your web_role/tasks/ directory and write the code below inside main.yml to configure the web server. So two tasks are created: one for the web server and another for the load balancer.

Create a playbook for the roles

Now create a playbook to run your roles; it contains only the information about your load balancer and web server instances and the location of the roles path. Run the playbook using the command below:

ansible-playbook playbookname.yml

After running the playbook, let's check that both web servers and the load balancer are working properly. Take the IP of the load balancer and browse to lb_public_ip:8080. It's working well with two web servers. Now, if you want to add one or more web servers, let's check how to add a new web server behind the load balancer. Provision one more instance with the ec2.yml playbook mentioned above. Open your inventory file and write the information about the newly launched instance that you want to make a web server. After adding the information about the new instance, run your playbook again.

Running the playbook again

As you can see in the clip above, one new web server has been configured successfully. So let's check the final output.

Final Output

The load balancer is also working great with all three web servers, as you can see in the clip above.

Conclusion

Here you have learned how to launch AWS instances, you have seen how roles work, and by the end of the article your whole setup is ready to update and configure the load balancer and web servers dynamically. You can add as many web servers as you want; just write the information about the operating system in the inventory file and run the playbook. I tried to explain as much as possible. Hope you learned something here.
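For reference, the playbook that ties the roles together (described in "Create a playbook for roles" above) can be as small as a host-to-role mapping. This is a sketch assuming inventory groups named `webserver` and `lb` and the role names used in this article; your group and role names may differ.

```yaml
# setup.yml (hypothetical sketch; group and role names are assumptions)
- hosts: webserver
  roles:
    - web_role

- hosts: lb
  roles:
    - lb_role
```

Re-running this single playbook after editing the inventory is what makes the scale-up described above a one-command operation.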
Feel free to check out my LinkedIn profile mentioned below, and obviously feel free to comment. I write blogs on Cloud Computing, Machine Learning, Big Data, DevOps, the Web, etc., so feel free to follow me on Medium. Thanks, everyone, for reading. That's all… Signing off… 😊
https://medium.com/hackcoderr/deploy-a-load-balancer-and-multiple-webservers-on-aws-instances-through-ansible-40f77e864438
['Sachin Kumar']
2020-12-28 07:43:54.170000+00:00
['Web Server', 'Load Balancer', 'AWS', 'Ansible', 'Haproxy']
Thanks Facebook, Now We All know Why Privacy Is Important
It's an unfortunate truth that most people today still don't understand the importance of privacy online. But thanks to Facebook, the world just got the harsh wake-up call that it needed. Now that we've begun to move our money online in a very literal sense with cryptocurrencies, this lesson is more poignant than ever before. The cryptocurrency world is all about decentralization. Decentralization makes things stronger because it reduces the number of single points that can lead to failure. However, our current world comprises massive centralized institutions that collect untold petabytes of personal data (with our permission, via incomprehensible contracts signed with a click or a tap). The degree to which data was harvested from Facebook by such institutions demonstrates the need for better security management and for decentralized security practices.

The Password Conundrum

It can be hard to visualize how little people pause to consider their own privacy and security online. To start with an example, let's talk about passwords. It's a well-known fact that most people use weak passwords. It's also common practice for people to use the same password on many websites. This makes sense: passwords are difficult to create, and even more difficult to remember. The problem with this type of personal security management is immediately evident, however. If a single website is compromised, then nefarious individuals could potentially gain access to almost all of your digital life. Even access to a single email account could unlock your world to an attacker through password reset forms. There's an easy solution for this, however. Password managers can be used to create unique and complex passwords for each and every website that we use. They can keep password data encrypted and offline, for those who are extremely security conscious. So there's a relatively easy solution for password management.
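The "unique and complex" job that password managers do is easy to sketch. Here is a minimal example using Python's standard `secrets` module; the length and character set are arbitrary choices for illustration, not a recommendation from the article.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password, as a password manager would."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different password for every site, so one breach can't unlock the rest.
password = generate_password()
print(password)
```

Because `secrets` draws from the operating system's cryptographic randomness, two calls will practically never collide, which is exactly the property that makes per-site passwords safe against the reuse problem described above.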
And yet, most people still don't take their privacy and security seriously. So what about how we handle our personal information online? We put our trust in services like Facebook and Google. If they are compromised, either through a hack or through gray-area data mining, the effect is still the same: our personal, sensitive data gets used in ways that we did not implicitly or explicitly approve of. To make things worse, the amount of data that services like Facebook collect about us is truly alarming. This is doubly so if you use a Facebook app on a mobile device and allow that app to track your location via your GPS signal.

The Facebook Behemoth

Facebook knows almost everything about you. It knows your name, where you live, where you work, who your friends are, who your family members are, how old you are, what your interests are, where you went last night, your shopping habits, your level of income, and so on. This data can then be cross-referenced billions of times with other data points in a huge mesh. Facebook, or companies that get access to Facebook's data via partnerships, can make startling conclusions about you without even needing to ask you personally. For instance, if you're a 32-year-old single female with one older brother, Facebook might be able to determine that your favorite type of fruit preserves is strawberry and not raspberry, based only on comments made by thousands of other individuals with similar life circumstances. This example may seem silly, but Facebook data has been used to determine far more personal secrets about people, and the results are always chilling to say the least.

Money Talks

It's been said that money is a form of language. It communicates our wants, our goals, our efforts and our trustworthiness. But today, how many people really think about how their banks or financial networks monitor them?
While the limelight has been placed on Facebook, the truth is that there are hundreds of shadowy financial companies and industries watching your every move. It all starts with your bank. Your bank knows how much money you have, how much money you owe to it, where your money comes from, and where it goes every day. This information can be incredibly powerful. Next, payment processors like Visa and PayPal similarly get full access to everything you do, and in some cases their knowledge could spread even wider than your bank's. Third are the credit reporting agencies, which keep a permanent record of every credit card payment you made and every time you were late on your student loan repayment. Even if you are getting divorced, they will know it.

Can You Get out of the System?

While it's easy to simply choose not to participate in Facebook, today it's not possible to just opt out of the banking and credit reporting systems. These services have become requirements, much in the same way a fast internet connection is required today in order to keep up with society. Even if you have never communicated with the credit reporting agencies, they know about you, and they are following your every move. Want to rent an apartment? Credit check required. Applying for a job? More often than not, a credit check is also required. Thinking about buying a house or a car? No doubt your credit will be checked, possibly even if you are paying in cash. In 2009, an individual calling themselves Satoshi Nakamoto created Bitcoin, and the first technology with the power to lead people away from banks, payment networks, and credit scoring agencies was born. Following this, many new cryptocurrencies came out, including privacy-focused projects that aim not only to offer complete financial control of one's own assets, but to do so in absolute privacy. Interest in these types of cryptocurrencies has grown rapidly.
Privacy-focused projects like PIVX, with its full compliance with the Zerocoin protocol, give users total anonymity when transacting. Other projects like Monero and Zcash, which use technologies like ring signatures and zk-SNARKs (a personal favorite of Edward Snowden), have also grown and helped shape the industry at large. What these cryptocurrencies represent is a step towards the privacy, security, and decentralization that we all, quite frankly, deserve.

The Next Step Towards Financial Privacy

Now that we have cryptocurrencies that protect privacy, we need a way to connect those valuable digital assets with the old monetary system so that the eventual transition towards cryptocurrency dominance can continue. Real-life use cases are essential, and we believe we have a solution that can not only offer decentralization and privacy, but also attract the next 100 million users to the blockchain. Celsius Network is launching later this year, and with it, it will bring the ability for those who hold or want to hold blockchain assets to do so in a secure manner that also enables them to secure low-interest cash loans. Celsius does not operate in the realm of the traditional banking sector. As such, it does not rely on credit reporting agencies, and thus it increases one's access to financial privacy and independence from the system.

Envisioning a High-Privacy Loan

While it's unfortunate that banks are still essential to many financial transactions, we believe that the steps we are taking at Celsius move us closer to a post-banking world. Let's imagine for a moment a circumstance in which someone could get an extremely high-privacy loan that never touches the credit reporting bureaus. First, an individual can open an account at a small not-for-profit bank such as a credit union. These kinds of institutions are highly unlikely to sell your private information, since they are not guided by the profit motive.
Next, they can transfer their value into (or, in some cases, even choose to get paid in) a private currency like PIVX. From there, they can anonymously convert their PIVX into Bitcoin or Ethereum and deposit it into their Celsius wallet. Now they can use Celsius Network to secure a cash loan that can be deposited at their credit union or bank. As an added bonus, since a loan does not count as "income," it does not need to be reported to tax authorities, because there are no capital gains to be paid.

Always Moving Forward

Is this solution perfect? Not yet, but we feel that combining the power of privacy-centric blockchain assets with the Ethereum network to create a decentralized approach is one way that anyone seeking privacy can find it with little difficulty. At launch, Celsius plans to support Bitcoin and Ethereum, but we intend to extend support to more and more of the top digital coins, including those that are strong on privacy features. Let's hope that the travesty that occurred with the Facebook data mining incident can serve as a catalyst and not simply a warning. Privacy is an inalienable human right. But it's something that needs to be taken (with permission), and not something that is just given out freely. Especially not by those who stand to gain from manipulating and selling your data.
https://medium.com/hackernoon/thanks-facebook-now-we-all-know-why-privacy-is-important-fccd630d73d1
['Alex Mashinsky']
2018-04-02 16:45:07.901000+00:00
['Bitcoin', 'Privacy', 'Facebook', 'Thanks Facebook', 'Social Media']
What Digital Health Learned From Netflix: How Data Science Is Creating Self-Learning Healthcare
By Eric Williams, Director of Data Science @ Omada Health

In 1991, millions of postmenopausal women were given a very good reason to be in a very good mood. It turns out the same hormone replacement therapies they'd been prescribed to balance their emotions came with an unexpected side benefit: a much healthier heart. That year, a meta-analysis in Preventive Medicine breathlessly announced that HRT was responsible for a 50% decline in heart disease. Let that sink in. Fifty percent. The study's authors became lauded scientists for having effectively uncovered a way to slash the number one cause of death for women in half. Some doctors even began advising female patients to take HRT for healthier hearts alone. There was only one problem. The study was totally wrong. It took over a decade to unravel all the flaws of the authors' meta-analysis. But their mistake was just the beginning. A 2002 randomized control trial proved the opposite of their assertion was true: estrogen replacement therapy has no protective effect and, potentially, increases the risk of heart disease. So what went wrong with the initial meta-analysis? "Correlation does not equal causation" is probably the most quoted (and neglected) mantra from your Statistics 101 professor or any data scientist in the field. However, the misleading implications of this common stat-trap are particularly dangerous when it comes to health research. In the case of HRT and heart disease, it took over a decade to unravel the fact that affluent women were more likely to get HRT and — here's the clincher — to take care of their heart health. And that was just one of many potential confounding factors that led to false conclusions. For over a century, randomized control trials (RCTs) have been the gold standard scientific methodology for testing not simply correlation, but causation — and rightfully so. But RCTs come with their own sets of challenges and limitations. Clinical trials tend to be slow, labor intensive, and expensive.
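The confounding at work in the HRT story can be made concrete with a toy simulation: affluence drives both who gets HRT and who has a healthy heart, while HRT itself has no effect at all. The effect sizes and noise levels below are invented purely for illustration.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 10_000
affluence = [random.random() for _ in range(n)]
# Affluence raises the odds of BOTH getting HRT and having a healthy heart;
# HRT itself has no causal effect on the heart in this toy model.
hrt = [1 if a + random.gauss(0, 0.3) > 0.5 else 0 for a in affluence]
healthy_heart = [1 if a + random.gauss(0, 0.3) > 0.5 else 0 for a in affluence]

print(f"corr(HRT, healthy heart) = {pearson(hrt, healthy_heart):.2f}")
```

The printed correlation is strongly positive even though, by construction, HRT does nothing, which is exactly the trap the 1991 meta-analysis fell into; randomization breaks the affluence link and is why the 2002 RCT got a different answer.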
Even more troubling, results from clinical trials often don’t generalize to broader populations, due to difficulty and biases introduced by patient recruitment. But today, we are on the precipice of a revolution in healthcare that has the potential to accelerate the discovery of causal links and enable a healthcare company of any size to test connections between courses of treatment and healthcare results for all types of patients. Call it the “burden of proof” transformation — the increasingly sticky idea that healthcare costs should be paid based on outcomes and not on activity. The rise of outcome-based healthcare reimbursement has aligned the motivations of payers and providers, and has the potential to kickstart the industry toward generating more efficient and effective healthcare solutions. At the same time, the explosion of digital health has given those in healthcare the unique opportunity to leverage data science and capture the full value of the vast amounts of health data, creating self-learning health systems that will lead to more effective healthcare for millions of Americans. Successful preventative healthcare is dependent on two things: accurately identifying who is at risk and determining how to intervene. The field of data science and its methodologies — namely analytics, machine learning, and experimentation — have the potential to completely change both the identification of those at risk (prediction) and the optimization (personalization) of their care. Here are three key ways data science is revolutionizing care and providing the potential for precision population health for the first time in human history. Prediction is the key to any successful preventive strategy — especially in healthcare. But a predictive model is only as good as the data that underlies it. 
Google's Chief Economist Hal Varian is famous for stating: "[Google] doesn't have better models; it just has more data." Until recently, machine learning has mostly benefited the digital marketing space. That's why your typical online interaction today will be peppered with targeted ads and product recommendations, which maximize the odds that you'll click on, subscribe to, or purchase a particular product. Or google "Target knows you're pregnant" to see how data analytics and machine learning can beat our brains to even some of the most personal revelations, predicting what products we'll need even before we realize it ourselves. Applying predictive modeling to healthcare is revealing itself to be a game-changer. What if, instead of using machine learning to suggest which movies you should watch next in your Netflix queue, it was harnessed to pinpoint those "tipping point" individuals who are most likely to forget to take a medication, miss a critical doctor's appointment, or fall off the wagon of a diet or new exercise routine? Clover Health, a technology-enabled Medicare Advantage provider, is making a big bet on predictive analytics impacting preventative care. Starting with roughly 16,000 Medicare Advantage members in six New Jersey counties, Clover's data scientists use claims and lab data to predict which members are most at risk of illness. Once patients are identified, Clover's nurse practitioners are deployed to their homes as a preventative intervention. Since Medicare Advantage plans receive subsidies from the government to cover both the premium and the claims of their members, effective prevention is fully aligned with Clover's business incentives. Clover has claimed that in the first half of 2015 these methods reduced hospital admissions of its members by 50% and hospital readmissions by 34%. In concert with predicting who is most at risk, effective prevention needs an effective intervention.
The experimental tools of data science, when deployed smartly, can do just that. The undeniably successful use of marketing optimization and user learning displayed by companies like Google and Netflix have made A/B testing table stakes for most technology companies these days. The process of hypothesis generation, randomization, and evaluation is now common language from developers to CEOs. Of course, A/B testing is old news, especially in the healthcare field. In fact, it largely originates from healthcare. Some trace the origins of systematic clinical trial design back as far as 1747 when surgeon James Lind tested six different proposed “cures” for scurvy (including, but not limited to: seawater, cider, vinegar, and — thankfully — citrus) on the crew of the HMS Salisbury. Since then, experimental design has solidified the “Randomized Control Trial” (basically, an A/B test) as the widely accepted gold standard for experimental measurement. The goal, no surprise, is to help determine causation between a delivered intervention and a primary outcome. But as the amount of available data has exploded, so too has the possibility of the “super-charged RCTs” — rapid A/B tests exploring multiple facets of interventions that give unprecedented insight into what works in healthcare. When it comes to preventing chronic disease, super-charged RCTs create new opportunities to understand human behavior, personalize, and optimize interventions to deliver the best health outcomes. But if a model is only as good as its data, then an A/B test is only as good as its outcome. Which means super-charged RCTs only matter if you can measure actual health outcomes. In digital health, this is surprisingly rare. At the time of writing, there are over 165,000 mobile apps available claiming health benefits and very few have any evidence to back up those claims. 
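Mechanically, the statistical core of an A/B test (or an RCT's primary comparison) is compact enough to write out. Below is a sketch of a two-proportion z-test; the trial numbers are made up for illustration and are not from any study cited in this article.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for H0: the two groups share the same success rate."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 120/1000 control patients readmitted vs 90/1000 treated.
z = two_proportion_z(120, 1000, 90, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

Here z comes out near 2.19, just clearing the 1.96 threshold; the "super-charged" part of modern A/B testing is not this arithmetic but running many such randomized comparisons continuously, at scale, against real outcomes.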
In fact, the gap between the health claims made by these apps and the troves of data they are collecting has become so enormous that a startup has been created to bridge it. Evidation Health, a Silicon Valley company, recognizes this missed opportunity. They aim to align the data collected by digital health interventions with the outcomes captured by health plans. This way, they can validate (or not) the health claims made by those companies and find the most effective interventions for health plans to implement. Established health companies also need to accept this responsibility. They’ll need to measure their effectiveness against promised outcomes if they want to truly capitalize on their own vast amounts of streaming data and optimize their interventions against these outcomes using iterative, RCT-like methodology. But this responsibility can just as easily be viewed as an opportunity — digital health companies should take advantage of the evolving field of experimental design. We can implement the most flexible, adaptive trial designs, and build systems that intrinsically improve with scale. Once more digital health companies embrace this approach, we’ll see massive changes in the ways that technology can influence healthy, sustainable, and scalable behavior change. The ultimate goal of these efforts is a self-learning healthcare system — one that generates continuous feedback on the most effective approach for populations and individual patients, then incorporates that feedback to create a virtuous cycle of improvement. The BioMe Biobank Program, led by The Charles Bronfman Institute for Personalized Medicine at the Icahn School of Medicine at Mount Sinai, is an effort to capitalize on the power of centrally collected and analyzed data.
By pooling genomic, environmental, and lifestyle data from thousands of diverse individuals, small signals — previously undetectable underneath large amounts of noise — can be detected and linked to health outcomes. The goal is precision disease classification and diagnosis, where medication and healthcare are delivered at the level of the individual, customized using each patient’s unique data profile. At Omada, we share the goal of “precision population health”, and our data science team is focused on personalizing and optimizing our flagship product, Prevent®. Prevent engages participants with an evidence-based curriculum, a supportive social network, the constant guidance of a personal health coach, and digital tracking tools that include a wireless scale and mobile app — all to help reduce a participant’s risk of progressing toward obesity-related chronic disease. Throughout Prevent, we deploy predictive models to identify those participants who are at risk of gaining weight or dropping out of the program. These models are based on digitally recorded program behavior data. Have you started tracking food less frequently? Are you logging your physical activity at more random times instead of on the schedule you developed with your personal health coach? Has your weight fluctuated recently? What behaviors indicate that a retired, male participant is likely to gain back the six pounds he has already lost, and how can our team intervene at the right moment to make sure it doesn’t happen? Omada uses a three-step process to maximize the effectiveness of the program: Measure The team is equipped with a constant firehose of data, measuring engagement, participant behavior, weight loss, and other key indicators of successful behavioral interventions.
Optimize Using experimental design, we run continual RCTs and A/B tests, creating a virtuous cycle of program improvement focused on maximizing the health outcomes that matter most for Prevent, including weight loss. Personalize Behavioral interventions are most effective when they are tailored to the needs of individual participants. By leveraging the power of big data and clinical rigor, we are focused on maximizing the efficacy of Prevent for every user — delivering the right interventions, at the right times, in the right ways. Here’s an example of this process in action: Each night, our machine learning algorithms are trained on streaming demographic data, as well as longitudinal engagement and weight data. They spend the night assessing participants and predicting those at high risk of gaining weight. By morning, these at-risk participants, randomized through our internal clinical trial management system, are surfaced to their health coaches along with specific, personalized intervention suggestions as defined by the predictive model. The effect of the suggested intervention on the participant’s outcomes is captured, studied, and used to iterate on and optimize both the prediction algorithm and the suggested interventions — continually refining and sharpening our ability to keep our participants from falling off track. As we’ve scaled Prevent over the last 18 months to over 40,000 participants, we’ve amassed a data set containing tens of millions of points on everything from weigh-ins to interactions with health coaches, group members, and curriculum. We’ve assembled one of the largest data sets on behavior change in human history — and can compare all of that data against continuously collected weight-loss results. This combination allows us to determine influences on behavior — and subsequently, influences on clinically meaningful outcomes.
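A drastically simplified sketch of that nightly predict-randomize-surface loop might look like the following. The features, coefficients, threshold, and intervention names here are all invented for illustration; the production models are retrained nightly on far richer data:

```python
import random

def risk_score(days_since_weigh_in, meals_logged_last_week):
    # Toy linear model: less engagement -> higher predicted risk.
    # (Invented coefficients; a real system learns these from data.)
    return 0.05 * days_since_weigh_in + 0.04 * max(0, 7 - meals_logged_last_week)

participants = [
    {"id": 1, "days": 1, "meals": 7},
    {"id": 2, "days": 9, "meals": 2},
    {"id": 3, "days": 4, "meals": 5},
]

random.seed(0)
for p in participants:
    p["risk"] = risk_score(p["days"], p["meals"])
    if p["risk"] > 0.3:  # invented risk threshold
        # Randomize the suggested intervention so that its effect on
        # outcomes can be measured afterwards, RCT-style
        p["suggestion"] = random.choice(["check-in message", "schedule a call"])

at_risk = [p["id"] for p in participants if "suggestion" in p]
print(at_risk)
```

The key design point is the randomization step: because at-risk participants are randomly assigned a suggested intervention, the captured outcomes double as experimental data for improving both the model and the interventions.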
This approach has huge implications for fighting what the Centers for Disease Control and Prevention (CDC) has labeled the public health challenge of our generation: chronic disease. There’s widespread agreement that the most effective way to tackle chronic disease is prevention. The biggest remaining challenges are A) how to scale effective interventions to deal with a problem that affects more than one in three American adults and B) how to design interventions for populations or personality types that respond to different incentives. With the data set we’ve built — which continues to grow every day — we’ve started to discover how small changes to an interface, or tiny shifts in how a health coach interacts with a patient, can have big impacts on essential health outcomes. As our data set grows richer, so will our experiments and results. Our ultimate goal is an adaptive, personalized curriculum that optimizes weight loss and decreases average blood sugar (A1c), while reducing as much risk as possible for every participant. Every day, we integrate new elements that drive us towards that goal. It’s an exciting time for healthcare. And an even more critical moment for the millions of Americans at the tipping point of chronic disease. For many of these people, data science is paving the way to longer, healthier, more fulfilling lives.
https://medium.com/omada-health/what-digital-health-learned-from-netflix-how-data-science-is-creating-self-learning-healthcare-2f0aa160c124
['Omada Health']
2016-03-21 17:47:44.090000+00:00
['Healthcare', 'Data Science', 'Health']
Why Gen Z Trusts Pornhub More than Other Brands
When it comes to advertising an adult industry, it’s going to take more than just money. Pornhub has the money, and they’ve proven this. In 2014 they purchased prime marketing real estate in New York City’s Times Square. The ad didn’t have anything more explicit than a silly pun, but the fact that an adult entertainment site could afford to put up an ad earned the site some attention. Its home in Times Square was quickly replaced by another advertisement, but it demonstrated how Pornhub would have to promote itself. It would need to gather attention, generate buzz, and use controversy to its benefit. While this isn’t a formula for success in traditional advertising, it translates well to the digital world; a world inhabited by Gen Z. Digital marketing is a necessity for most brands, but the brands that engage correctly are the ones who receive praise from Gen Z. Having a sense of humor is critical to Pornhub’s branding. How else could they address their content in public settings? It’s not like they’re able to post X-rated images on social media without getting censored. Rather, they try to connect with potential customers by appealing to other emotions. Whether it’s through humor or shock value, perhaps a mix of both, this type of marketing gets attention. The effort to gain attention also requires Pornhub to destigmatize its product. I’m not going to argue over the moral implications of Pornhub’s service; rather, I’ll state some facts. There will always be people who reject the site, some people will admit to using it, but most people who use the site do so in silence. Meanwhile, many people will conjure a mental image of a specific type of person when they think of a Pornhub user. Despite this image, the data supports the idea that usership is not limited to specific audiences. Their marketing efforts may appeal to an older audience or a female audience, groups that are stereotypically ignored, but Pornhub knows these efforts are worthwhile.
So, what does Pornhub do when they’re forced to adapt to a new marketing climate? They use their data to paint a picture. This picture reveals many truths about society, and the data isn’t sugarcoated.
https://medium.com/swlh/why-gen-z-trusts-pornhub-more-than-other-brands-1c9e5211ef98
['Michael Beausoleil']
2019-06-18 15:48:53.840000+00:00
['Marketing', 'Generation Z', 'Branding', 'Data', 'Branding Strategy']
Handy Python Snippets To Handle Random Things
Photo by Raul Cacho Oses on Unsplash I like to keep a few Python scripts handy. This is more of a cheatsheet, something to search when you need a particular thing! Most of these are one-liners, so they are really helpful to add into other scripts! You should be able to run them in the following way: python -c "{script}" e.g. python -c "import uuid; print(uuid.uuid1())" Generate a UUID import uuid; print(uuid.uuid1()) Create a Fernet Key
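A Fernet key, as produced by the third-party cryptography package, is under the hood just 32 random bytes encoded in url-safe base64. Here is a stdlib-only sketch (my own equivalent of the snippet, not taken from the cheatsheet itself):

```python
import base64
import os

# With the third-party 'cryptography' package installed you would write:
#   from cryptography.fernet import Fernet; print(Fernet.generate_key())
# The stdlib equivalent: 32 random bytes, url-safe base64 encoded.
key = base64.urlsafe_b64encode(os.urandom(32))
print(key.decode())
```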
https://medium.com/beardydigital/handy-python-snippets-to-handle-random-things-2ec0a9b2d2c1
['Craig Godden-Payne']
2020-05-13 07:24:35.239000+00:00
['Software Development', 'Python']
Multi-label Text Classification using BERT – The Mighty Transformer
The past year has ushered in an exciting age for Natural Language Processing using deep neural networks. Research into using pre-trained models has resulted in a massive leap in state-of-the-art results for many NLP tasks, such as text classification, natural language inference and question-answering. Some of the key milestones have been ELMo, ULMFiT and the OpenAI Transformer. All these approaches allow us to pre-train an unsupervised language model on a large corpus of data, such as all Wikipedia articles, and then fine-tune these pre-trained models on downstream tasks. Perhaps the most exciting event of the year in this area has been the release of BERT, a multilingual transformer-based model that has achieved state-of-the-art results on various NLP tasks. BERT is a bidirectional model based on the transformer architecture; it replaces the sequential nature of RNNs (LSTM & GRU) with a much faster attention-based approach. The model is also pre-trained on two unsupervised tasks, masked language modeling and next sentence prediction. This allows us to use a pre-trained BERT model by fine-tuning it on downstream specific tasks such as sentiment classification, intent detection, question answering and more. Okay, so what’s this about? In this article, we will focus on the application of BERT to the problem of multi-label text classification. The traditional classification task assumes that each document is assigned to one and only one class, i.e. label. This is sometimes termed multi-class classification, or, if the number of classes is 2, binary classification. On the other hand, multi-label classification assumes that a document can be simultaneously and independently assigned to multiple labels or classes. Multi-label classification has many real-world applications, such as categorising businesses or assigning multiple genres to a movie.
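In the multi-label setting, each document carries a multi-hot label vector, and each label receives an independent probability via a sigmoid; the loss this implies is binary cross-entropy with logits, which is exactly the loss this article adopts later (BCEWithLogitsLoss). Here is a tiny pure-Python version of that loss for a single example; the logits and targets are made up, and in the real model PyTorch computes this over a whole batch:

```python
import math

def bce_with_logits(logits, targets):
    # Numerically stable binary cross-entropy with logits, averaged
    # over labels; mirrors torch.nn.BCEWithLogitsLoss for one example.
    total = 0.0
    for z, y in zip(logits, targets):
        # Stable form of -[y*log(sigmoid(z)) + (1-y)*log(1-sigmoid(z))]
        total += max(z, 0) - z * y + math.log1p(math.exp(-abs(z)))
    return total / len(logits)

# One comment, six toxicity labels: it can be both "toxic" and "obscene"
logits = [2.1, -1.3, 0.4, -3.0, 1.7, -0.2]    # model outputs, one per label
targets = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]      # multi-hot ground truth
print(round(bce_with_logits(logits, targets), 4))
```

Because each label contributes its own independent binary term, nothing forces the label probabilities to sum to one, which is what distinguishes this from the softmax cross-entropy used in multi-class classification.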
In the world of customer service, this technique can be used to identify multiple intents for a customer’s email. We will use Kaggle’s Toxic Comment Classification Challenge to benchmark BERT’s performance for multi-label text classification. In this competition we will try to build a model that will be able to determine different types of toxicity in a given text snippet. The types of toxicity, i.e. toxic, severe toxic, obscene, threat, insult and identity hate, will be the target labels for our model. Where do we start? Google Research recently open-sourced the TensorFlow implementation of BERT and also released the following pre-trained models:

BERT-Base, Uncased: 12-layer, 768-hidden, 12-heads, 110M parameters
BERT-Large, Uncased: 24-layer, 1024-hidden, 16-heads, 340M parameters
BERT-Base, Cased: 12-layer, 768-hidden, 12-heads, 110M parameters
BERT-Large, Cased: 24-layer, 1024-hidden, 16-heads, 340M parameters
BERT-Base, Multilingual Cased (New, recommended): 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
BERT-Base, Chinese: Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parameters

We will use the smaller BERT-Base, Uncased model for this task. The BERT-Base model has 12 attention layers, and all text will be converted to lowercase by the tokeniser. We are running this on an AWS p3.8xlarge EC2 instance, which translates to 4 Tesla V100 GPUs with a total of 64 GB of GPU memory. I personally prefer using PyTorch over TensorFlow, so we will use the excellent PyTorch port of BERT from HuggingFace, available at https://github.com/huggingface/pytorch-pretrained-BERT. We have converted the pre-trained TensorFlow checkpoints to PyTorch weights using the script provided within HuggingFace’s repo. Our implementation is heavily inspired by the run_classifier example provided in the original implementation of BERT. Data representation The data will be represented by the class InputExample.
text_a: the text comment
text_b: not used
labels: list of labels for the comment from the training data (will be empty for test data for obvious reasons)

We will convert the InputExample to the feature that is understood by BERT. The feature will be represented by the class InputFeatures.

input_ids: list of numerical ids for the tokenised text
input_mask: will be set to 1 for real tokens and 0 for the padding tokens
segment_ids: for our case, this will be set to a list of ones
label_ids: multi-hot encoded labels for the text

Tokenisation BERT-Base, Uncased uses a vocabulary of 30,522 words. The process of tokenisation involves splitting the input text into a list of tokens that are available in the vocabulary. In order to deal with words not available in the vocabulary, BERT uses a technique called BPE-based WordPiece tokenisation. In this approach, an out-of-vocabulary word is progressively split into subwords, and the word is then represented by a group of subwords. Since the subwords are part of the vocabulary, we have learned representations and context for these subwords, and the context of the word is simply the combination of the contexts of the subwords. For more details regarding this approach, please refer to Neural Machine Translation of Rare Words with Subword Units (https://arxiv.org/pdf/1508.07909). P.S. This, in my opinion, is as important a breakthrough as BERT itself. Model Architecture We will adapt the BertForSequenceClassification class to cater for multi-label classification. The primary change here is the usage of the binary cross-entropy with logits loss function (BCEWithLogitsLoss) instead of the vanilla cross-entropy loss (CrossEntropyLoss) that is used for multi-class classification. Binary cross-entropy loss allows our model to assign independent probabilities to the labels. The model summary shows the layers of the model along with their dimensions.
BertEmbeddings: the input embedding layer
BertEncoder: the 12 BERT attention layers
Classifier: our multi-label classifier with out_features=6, each corresponding to one of our 6 labels

Training The training loop is identical to the one provided in the original BERT implementation in run_classifier.py. We trained the model for 4 epochs with a batch size of 32 and a sequence length of 512, i.e. the maximum possible for the pre-trained models. The learning rate was kept at 3e-5, as recommended in the original paper. We had the opportunity to use multiple GPUs, so we wrapped the PyTorch model inside the DataParallel module. This allows us to spread our training job across all the available GPUs. We did not use the half-precision FP16 technique because, for some reason, the binary cross-entropy with logits loss function did not support FP16 processing. This doesn’t really affect the end result; it simply takes a bit longer to train. Evaluation Metrics We adapted the accuracy metric function to include a threshold, which is set to 0.5 by default. For multi-label classification, a far more important metric is the ROC-AUC curve. This is also the evaluation metric for the Kaggle competition. We calculate ROC-AUC for each label separately. We also use micro-averaging on top of the individual labels’ ROC-AUC scores. I would recommend reading this excellent blog to get a deeper insight into the ROC-AUC curve. Evaluation Scores We ran a few experiments with a few variations but got more or less similar results. The outcome is as listed below:

Training Loss: 0.022, Validation Loss: 0.018, Validation Accuracy: 99.31%

ROC-AUC scores for the individual labels:
toxic: 0.9988
severe-toxic: 0.9935
obscene: 0.9988
threat: 0.9989
insult: 0.9975
identity_hate: 0.9988
Micro ROC-AUC: 0.9987

The result seems quite encouraging, as we seem to have created a near perfect model for detecting the toxicity of a text comment. Now let’s see how we score against the Kaggle leaderboard.
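As an aside, ROC-AUC has a convenient rank-based interpretation: it is the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. That makes a minimal reference implementation easy to sketch; the scores below are invented, not actual model outputs:

```python
def roc_auc(scores, labels):
    """Rank-based ROC-AUC for one label: the fraction of
    (positive, negative) pairs in which the positive example received
    the higher score. Ties count as half a correct ordering."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Predicted probabilities for one label, with ground truth
scores = [0.9, 0.8, 0.65, 0.6, 0.1]
labels = [1, 1, 0, 1, 0]
print(roc_auc(scores, labels))  # 5 of 6 pairs ranked correctly
```

Note that only the ranking of the scores matters, not the 0.5 threshold, which is why ROC-AUC is more informative than accuracy on heavily imbalanced labels like "threat".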
Kaggle result We ran inference on the test dataset provided by Kaggle and submitted the results to the competition. The following was the outcome: we scored 0.9863 ROC-AUC, which landed us within the top 10% of the competition. To put this result into perspective, this Kaggle competition had prize money of $35,000, and the first-place winning score was 0.9885. The top scores are achieved by teams of dedicated and highly skilled data scientists and practitioners. They use various techniques such as ensembling, data augmentation and test-time augmentation in addition to what we have done so far. Conclusion and Next Steps We have implemented a multi-label classification model using the almighty BERT pre-trained model. As we have shown, the outcome is really state-of-the-art on a well-known published dataset. We were able to build a world class model that can be used in production for various industries, especially in customer service. For us, the next step will be to fine-tune the pre-trained language models on the text corpus of the downstream task, using the masked language modeling and next sentence prediction tasks. This will be an unsupervised task and will hopefully allow the model to learn some of our custom context and terminology. This is similar to the technique used by ULMFiT. I will share the outcome in another blog, so do watch out for it. I have shared most of the code for this implementation in the code gist. However, I will merge my changes back to HuggingFace’s GitHub repo. I would encourage you all to implement this technique on your own custom datasets and would love to hear your stories. Also please feel free to contact me on LinkedIn or Twitter.
https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d
['Kaushal Trivedi']
2019-02-13 22:22:25.299000+00:00
['Machine Learning', 'Bert', 'NLP', 'AI', 'Deep Learning']
Take a sick day
Our tools and technology make it so easy to work remotely instead of taking a genuine sick day. But just because you can do something does not always mean that you should. Last year I fell into a bad habit of working from home when I was sick. Rather than resting and giving my body and mind the space to heal, I would continue to work. Looking back, this was a poor choice as often the work I did was of lower quality and the work still came at a physical and mental cost. My thinking has not stopped there though. I think a reluctance to take true sick leave reveals some deeper issues. Switching off is hard In an age of phone notifications and instant messaging it can be extremely hard to switch off from work when at home. I also get to work on some incredible things and I find myself wanting to think about them all the time. Not being able to switch off has negative effects on home life so taking a sick day is worth doing just for that! Part of being mentally able to take sick days is practising switching off from work while at home. Historically I totally suck at switching off from work. However, it is something I am trying to get better at. Things I am trying include: Not having twitter on my phone (I have already removed Facebook and Messenger) Redirecting Github and Bitbucket notifications to my work email Not getting work emails on my phone Turning on strict notification snoozing for Slack (only allow notifications from 9am — 5pm) Not getting notifications from other work-related tools on my phone Mental health and physical health (I am not a healthcare professional. These are just my subjective experience and observations.) When I am stressed, tired, and overwhelmed I am more susceptible to getting sick and staying sick for longer. 
While working from home might be great at forcing you to physically rest (and avoid contaminating your teammates), it might not do anything to address the underlying stress and tiredness that could have allowed your sickness to take hold in the first place. This can be a bit tricky when a deadline is one of the things causing you stress because taking sick leave can exacerbate that. I have no good answer for this. I can feel an answer forming, but it exposes some uncomfortable personal truths… Culture When we adopt a work-related behaviour we are casting a culture vote for it. The more people work from home when sick, the more normalised it becomes. This has a flow-on effect to how we plan work. If we come to expect that people will continue to work while ill, we create timelines and estimates that don’t account for possible illnesses — which in turn makes it harder to take genuine sick leave. The question is: what kind of culture do you want to foster? Self-worth Ok: this one cuts deep. As a software engineer, there are so many measures by which you can define your personal worth: lines of code added/removed bugs closed education GitHub stars Twitter followers conference talk count salary (the list goes on) (source: my awesome tweet🤘- follow me! oh wait…) These measures have their basis in your personal output and achievements. Taking a sick day can often cost you achieving something. Conceptually, if you are able to tap into a formula for personal worth outside of what you achieve, it might be easier to step away from work for a bit. This is a huge topic that I won’t explore further here — but worth thinking about. Some questions this raises for me, are: How much do you trust your team to fill your role while you’re away? What is your attitude towards code ownership? Final words These thoughts are an offshoot of some personal reflections on work-life balance I have been having lately. I hope you have found them insightful. 
Not using sick leave correctly costs you and the people around you. So, if you’re sick, don’t work from home — rest from home.
https://medium.com/smells-like-team-spirit/protect-your-work-life-balance-take-a-sick-day-8a89f30549d5
['Alex Reardon']
2019-03-01 00:19:22.752000+00:00
['Team Culture', 'Work Life Balance', 'Software Development', 'Productivity', 'Sick Day']
Deploy a Full Stack Application With No Code
This blog was written in collaboration with Rohan Shiva Introduction Creating a nice, clean, polished dashboard is a skill every programmer wants to have. Being able to translate raw data into a visual that’s easy to understand is a hard task. Plus, most dashboards require knowledge of HTML, CSS, and JavaScript, along with inherent UI design skills. In an effort to make this process easier, we created a framework for making a shareable dashboard for proof-of-concept use cases. Let’s look at the specific technologies in play. Streamlit Streamlit is a Python framework that makes creating dashboards easier than ever. It lets you create a full, interactive dashboard without ever touching HTML or JS. Streamlit internally handles all of the front-end code and reduces complex structures to one-line functions. This allows everyone to create impressive dashboards with only a few lines of Python. We used Streamlit for the front-end aspect of the dashboard. TigerGraph TigerGraph is the fastest graph database platform in the world. With TigerGraph, you can create graphs with billions of nodes and edges that operate in real time. TigerGraph also makes use of GSQL, its own graph query language that mimics SQL syntax and allows for quick and easy querying of graph data. We used TigerGraph to store our data. Specifically, we used the Covid-19 Starter Kit from TigerGraph, which has Covid-19 data from the Korean Centers for Disease Control & Prevention (KCDC). Google Colab Google Colab is a free, cloud-based, Jupyter notebook environment. With Colab, you can create Python scripts without having to worry about disk space and RAM on your local machine. Additionally, like any file stored in Google Drive, Colab notebooks are completely shareable. We used Google Colab to make the dashboard shareable and allow each user to have their own copy. Now, let’s walk through the process of launching the dashboard.
The entire process takes about 15 minutes and requires no code, making it accessible to everyone. Outline Creating a TigerGraph Cloud Account Choosing the Covid-19 Starter Kit Loading Data Into the Graph Creating the Streamlit Dashboard Running the Google Colab Notebook Exploring your dashboard 1. Creating a TigerGraph Cloud Account The first step is creating a TigerGraph Cloud account. Head over to the TigerGraph signup page and follow the steps provided. You can also watch this demo if you get lost along the way. Creating TG Cloud Account 2. Choose Covid-19 Starter Kit Once your account has been created, you need to create a graph database. Start by clicking the My Solutions button and choosing the Covid-19 Analysis starter kit. Creating a TigerGraph Solution 3. Loading Data Into the Graph Once it’s ready, load GraphStudio and click the Load Data tab. Hit the play button, and the data will start loading. Loading Data Once done, the graph is good to go. There are lots of other tabs that let you explore the graph and the data. There are also a number of premade queries that show off the native GSQL used by TigerGraph. Other graph functions We’ll skip over these features for now and move on to creating the visualization. 4. Creating the Streamlit Dashboard Luckily, the script to create the Streamlit dashboard is already written. If you want to see how the dashboard was made, check out this blog by Rohan Shiva. Next, continue to the premade Google Colab notebook. 5. Running the Google Colab Notebook When you start up the notebook, you’ll come across a form asking for the information for your graph instance. Entering Information Once you enter your information, hit the Launch Dashboard button. This will start the loading process. The program will take about 2 minutes to load. Loading Animation (Takes a couple of minutes) You may be wondering: what’s happening behind the scenes? The code first installs the pyTigerGraph Python package.
This package simplifies the TigerGraph connection process and lets you call TigerGraph functions in Python. The code then grabs other packages like Streamlit. Next, it grabs the script for the Streamlit dashboard and the corresponding query that extracts information from the graph. Then, using pyTigerGraph, the query is uploaded to the graph. Finally, the code configures a secure tunnel using ngrok to deploy the Streamlit app. Normally Streamlit is deployed locally, but the localhost can’t be accessed from Google Colab (since it’s a cloud environment). Thus, we use ngrok to link the localhost to a live website. 6. Exploring your Dashboard Once the loading has taken place, the dashboard is good to go. Hit the link provided, and you’ll be brought to a webpage with the Streamlit visualization. You now have a full dashboard app at your disposal! Dashboard Demo Conclusion We presented a general framework for creating readily-deployable, premade dashboards. The entire process takes about 15 minutes and requires no code, making it accessible to everyone. This process can be applied to other technologies as well, as long as the 3 main components are met: Again, shoutout to Rohan Shiva for helping write this blog.
https://medium.com/swlh/deploy-a-full-stack-application-with-no-code-180c4d4e6fc8
['Akash Kaul']
2020-09-08 19:32:47.313000+00:00
['Dashboard', 'Coding', 'Streamlit', 'Graph', 'Google Colab']
Javier Colon
Giving back and doing the right thing has always been a special part of my life. When given the chance to ask Javier one question in the flurry of events after his win on The Voice I decided to ask what charity projects he is working on. Here is his answer: Q: I was just wondering if there are any charities that you plan to support now or that you want to support in the future? A: Absolutely. My mother and father are actually both cancer survivors. So with that we have done a lot of charities in the past. We do some golf tournaments and things like that that support a lot of charities all over the country and some all over the world. And I plan on definitely supporting charities like the Susan G. Komen Foundation as well as Jimmy Lee that is based out of North Carolina that gives millions of dollars to research through their events that they hold every year. As you know, I’m a huge family guy and we might try to do some charities that are also very family based as well.
https://medium.com/a-teen-view/javier-colon-7affcc32001e
['Arin Segal']
2016-11-04 00:42:07.001000+00:00
['Javier', 'Colon', 'Music']
The Vignelli Subway Map is an Effective User Design
Emphasis on text, numbers and data points will lead the audience to the information we want them to know. Cole Knaflic’s book, Storytelling with Data, outlines the following principles for effective visualizations: 1. Understand the context of what you need to communicate. NYC subway riders needed to know how to navigate the system, from which line to use to how to get to the platform. This also needed to be done in a consistent manner. 2. Choose an appropriate display to visualize it. Subway riders needed visuals to help them understand where they were within the station as well as along the subway route. There are various ways to display this information via maps, diagrams and text. 3. Eliminate clutter. The signage used in the subway system prior to the 1970s was in disarray and had inconsistent lettering and positioning. The subway maps had a lot of extra information that most likely confused subway riders and slowed down the processing of the information. 4. Draw attention where you want your audience to focus. Subway riders first and foremost want to know where they are and where they need to go. The use of color, arrows and lettering are a few of the ways this begins to help as they enter a station — possibly for the first time. 5. Choose a visual display that will allow your audience to do what you need them to do. There are various displays that help subway riders navigate the system. It’s important that someone entering the system who may not be familiar with it can identify and understand at first glance where to look when navigating the station, subway line information and service changes.
https://medium.com/nightingale/making-the-nyc-subway-user-friendly-through-effective-visuals-22d3da2be649
['Allen Hillery']
2020-09-13 03:05:48.447000+00:00
['Design', 'Dvhistory', 'New York City', 'Maps', 'Data Visualization']
Flutter Future Tutorial — Asynchronous Dart Programming
Asynchronous programming is very important for mobile application development. In the Flutter framework, there are two ways to do asynchronous programming: using Futures and using Streams. In this tutorial we will learn how to do asynchronous programming using Futures.

Futures: Dart is a single-threaded programming language. A Future<T> object represents the result of an asynchronous operation which produces a result of type T. If the result is not a usable value, the future’s type is Future<void>. A Future represents a single value, either data or an error, delivered asynchronously. There are two ways to handle Futures: using the Future API, or using the async and await operations.

Let’s write a program and see the output:

Future delayedPrint(int seconds, String msg) {
  final duration = Duration(seconds: seconds);
  return Future.delayed(duration).then((value) => msg);
}

main() {
  print('Life');
  delayedPrint(2, "Is").then((status) {
    print(status);
  }).catchError((err) => print(err));
  print('Good');
}

Output:
Life
Good
Is

Here we defined a delayedPrint function, a mocked asynchronous operation that completes with a value after the given number of seconds have passed. We also used the Future API’s then method. So when the program runs, we see 'Is' printed after all the synchronous operations. To learn the Dart programming language and the Flutter framework, check out my new course with a discount on Udemy: Mastering Dart Programming | For Flutter Developers. But sometimes, after one operation completes and returns its data, we want to do something else with that data. In that case we can use an asynchronous operation in a synchronous fashion. Now let’s run the following program and see the output:

Future delayedPrint(int seconds, String msg) {
  final duration = Duration(seconds: seconds);
  return Future.delayed(duration).then((value) => msg);
}

main() async {
  print('Life');
  await delayedPrint(2, "Is").then((status) {
    print(status);
  });
  print('Good');
}

Output:
Life
Is
Good

In this case, we used the async and await keywords.
await suspends the control flow until the operation completes. To use await within a function, we have to mark that function with the async keyword, which means the function is an asynchronous function. Source: My Blog
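For readers coming from other ecosystems, the same two orderings can be sketched outside Dart. Here is a minimal, illustrative analog in Python using asyncio; the function names mirror the article’s Dart example, but this is my sketch, not the author’s code:

```python
import asyncio

async def delayed_print(seconds, msg):
    # mock asynchronous operation, like Dart's Future.delayed(...).then(...)
    await asyncio.sleep(seconds)
    return msg

fire_and_forget_log = []

async def fire_and_forget():
    # schedule the operation without awaiting it, like calling .then()
    fire_and_forget_log.append('Life')
    task = asyncio.ensure_future(delayed_print(0.01, 'Is'))
    fire_and_forget_log.append('Good')      # runs before the task finishes
    fire_and_forget_log.append(await task)  # 'Is' arrives last

awaited_log = []

async def awaited():
    # await suspends the control flow until the operation completes
    awaited_log.append('Life')
    awaited_log.append(await delayed_print(0.01, 'Is'))
    awaited_log.append('Good')

asyncio.run(fire_and_forget())  # Life, Good, Is
asyncio.run(awaited())          # Life, Is, Good
```

As in the Dart version, the only difference between the two functions is whether the result is awaited before the next statement runs.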
https://medium.com/level-up-programming/flutter-future-tutorial-asynchronous-dart-programming-a0c989c0f612
['Mahmud Ahsan']
2020-10-20 06:01:40.324000+00:00
['Async', 'Await', 'Flutter', 'Dart', 'Future']
Research Says It Doesn’t Matter How Fast You Walk; Just Walk
Research Says It Doesn’t Matter How Fast You Walk; Just Walk Total steps matter more than intensity for mortality Photo by Danny Howe on Unsplash There are many reasons I love walking. For one, it’s easy. Almost everyone can lace up a pair of shoes and walk outside (or inside). Heck, all the movement we do in a day, including chores, running errands, and walking to the office (if that’s still a thing), adds up to the total number of steps we complete each day. Walking can also improve your mood, help you destress, provide heart-pumping exercise, and even help the waistline. The American Heart Association recommends getting at least 150 minutes per week of moderate-intensity aerobic activity or 75 minutes per week of vigorous aerobic activity, or a combination of both, preferably spread throughout the week. They also advise spending less time sitting. Even light-intensity activity can offset some of the risks of being sedentary. Prolonged sitting (defined as sitting for more than 2 hours) is linked to an increased risk of obesity and type 2 diabetes. While fitness trackers and smartwatches almost always set a daily target of 10,000 steps, there’s limited scientific evidence that 10,000 is some “magic” number. In fact, 10,000 steps began as a marketing campaign by a Japanese pedometer company. Authors of a March 2020 study published in the Journal of the American Heart Association acknowledged that higher step counts are associated with lower mortality. Still, previous studies had been conducted in older adults, in individuals with debilitating chronic conditions, or in cohorts with relatively few deaths, which may limit generalizing those findings to the majority of the population. For this reason, they set out to examine the association between step count, intensity, and risk of death in a broader range of the U.S. population.
They used data on physical activity collected by a national health survey, the National Health and Nutrition Examination Survey (NHANES), between 2003 and 2006. The research team studied participants 40 and older who wore an accelerometer for one week. An accelerometer measures not just steps but how fast you take them (steps per minute). Then they followed those subjects for the next decade, specifically tracking how many died and the cause of death. According to the National Institutes of Health: During the decade of follow-up, 1,165 out of the 4,840 participants died from any cause. Of these, 406 died from heart disease and 283 died of cancer. Compared with people who took 4,000 steps a day, those who took 8,000 steps a day at the start of the study had a 50% lower risk of dying from any cause during follow-up. People who took 12,000 steps a day had a 65% lower risk of dying than those who took only 4,000. Higher step counts were also associated with lower rates of death from heart disease and cancer. These benefits were consistent across age, sex, and race groups. It’s important to add that no one has proven cause and effect — only that walking more seems to go hand in hand with living longer. It could be that people who walk more steps eat better, exercise more, and avoid things like smoking and excess alcohol use. First author Dr. Pedro Saint-Maurice of NCI explained, We wanted to investigate this question to provide new insights that could help people better understand the health implications of the step counts they get from fitness trackers and phone apps. What’s interesting is that step intensity did not seem to affect the risk of mortality once the total number of steps per day was considered. As far as this study is concerned, it’s more important to focus on total steps than on how fast you take those steps.
https://medium.com/in-fitness-and-in-health/research-says-it-doesnt-matter-how-fast-you-walk-just-walk-dc01ddfd0f98
['Suzie Glassman']
2020-12-18 21:07:22.342000+00:00
['Life Lessons', 'Health', 'Life', 'Self', 'Fitness']
Self-Taught Developer In Designing The Best Portfolio
“Nothing behind me, everything ahead of me, as is ever so on the road.” Let me first tell you a little bit about my experience when I started working in my first developer job, because I want you to know how it is in real life before you start designing your portfolio. As self-taught developers, we don’t have much to offer yet. Sure, personal projects and a well-designed resume are the usual way to do it, but let me tell you: these days everyone knows that you can easily get the source code, that you can easily copy other people’s work, and the competition is getting harder and harder. Competing with engineering and computer science graduates is overwhelming, so maybe we can take a different approach? We need something more to offer. We need to give them something they haven’t yet found; we need to think differently. Let me share with you how I created mine.

Starting web development literally got me out of my comfort zone. The moment I decided to pursue the journey also meant there was no turning back. It was my one-way ticket, so whatever actions and decisions I chose from that moment forward would determine my future. I thought I needed to be cautious while still carving my way through it. I didn’t know if it would work, I didn’t know which direction to go, I didn’t know who to follow. All I could think about was that I needed to make it happen if I wanted to live the life I wanted to create.

“Do or do not, there is no try.” — Yoda

I thought it wasn’t just about luck. I grew up knowing that to have something you have to go out and get it; the world doesn’t owe anyone anything. People who were born without a silver spoon were born to create one, to provide one. You may not have been born into a millionaire or billionaire family, probably because you are meant to be the first one. As they say, you can’t go back and change the beginning, but you can start where you are and change the ending. That’s probably just how the universe works, right?
I didn’t have many options at that time. It was like driving without knowing which road to take, when to turn, or where to stop over. All I knew was that someone had gotten there, that someone had been able to make it work. All I knew was that it was possible, and that right there, the finish line, the top, was what I wanted to have. That’s how I wanted my life to be designed, to become.

My journey in web development was crazy. From day one up until today, every day has been like a battlefield. Working with eight other computer science graduates was the scariest part; it felt like being thrown into a pack of hungry wolves, one of the scariest chapters of my life. To be honest, there was no day I didn’t doubt myself, and no shortage of moments when I wanted to quit. The first quarter of my career as a developer was full of excitement. Experiencing web development in real life is so fulfilling, knowing that all my hard work paid off; a dream come true indeed, after months and months of constant, extreme hard work. One thing is for sure: I would do it all over again just to hear them calling me a developer, an engineer. Getting the first paycheck felt like winning the jackpot, and I will forever be thankful: to God; to my boss, who took a chance on me even when I had nothing to offer other than my willingness and my courage to do everything I could to become one of them; and also to my past self. Salute! She never hit the send button on the resignation letter she kept at her side, even while crying for help debugging. It was the craziest feeling ever, but I wouldn’t trade it for anything. The life of a self-taught developer is as crazy as hell and as rewarding as a champ.

The Technical Part

“For Web Developers, your website is more than just a description of your work — it is your work. It’s a place where you can demonstrate what you’re capable of.”

Before you start sending out your job applications, first build your portfolio; it represents your technical skills.
Your portfolio is like your ticket to the concert, your ticket to the other side, your ticket into the world of web development. You gotta do what you gotta do: build it. Here are some tips.

1 — Keep it simple

Let your output or project speak for itself. This is your chance to show them what you are capable of doing. Show them your stuff; this is you, these are your eye bags, your sleepless nights, your hard work. Try to make it as simple as possible, because a good design is a simple design. One look, one transition, one click: if that’s what the user wants, if that’s where the user wants to go, that’s it. Good user interface, good user experience.

2 — One of the goals when building an application is the user experience

Easy to read, easy to use, easy to navigate. Another thing to consider when you build your portfolio website is to include designs for several screen sizes, since businesses are targeting users’ mobile phones nowadays. Applying a mobile-first approach is a must. Give a good impression by simply showing them that your work is not just for the desktop, but mobile-friendly too.

“We love opening your website and then immediately adjusting the browser window width back and forth” — Freecodecamp

Always make the user experience 100%; after all, that is why applications are built in the first place. You won’t just leave a good impression, you will also have leveled up your career as a web developer.

3 — Choose wisely

When it comes to building up your portfolio, make sure to include only the projects that matter. Quantity doesn’t get you the job; quality does. The reality is that recruiters don’t spend much time on each applicant, since they will be interviewing many of them. Basically, they are going to spend 15–30 seconds browsing your projects, so with that in mind, make sure to include only your very best and to take advantage of every second you get.
When considering which projects to include, here are some questions to ask yourself. What kind of project would make me proud? If I could have only one project in my life to show off to anyone, what kind of project would it be? When choosing which one to highlight, ask yourself: what makes this project different from the other projects I built? If things still aren’t working after several months, do a review, ask for feedback from friends or even strangers, and make the necessary changes.

4 — Show your personality

This is where my magic happened, and I will elaborate below on some non-technical aspects that helped me get my first job. Even though we work in the technology industry, companies hire living, breathing people with personality and character. They are not hiring just for skills; they want the full package. When I was interviewing for one of my current jobs, one of the super bosses asked me if I had experience working in an office, since I had spent years working remotely. I said yes, and then he explained why he asked: “We are not hiring a machine, so we need to know more about you, as an assurance that we are not in danger.” Then he laughed. I still wasn’t hired at that point, so I laughed as well; it was a mandatory laugh.

Here are some good resources you can use when building your portfolio:

1 — Developer Portfolio Tips from the one and only Brad Traversy

2 — Gold tips and guides on building the best portfolio, by FreeCodeCamp

3 — And, speaking of building a responsive, mobile-first website portfolio, here is one from Julio

The Non-Technical Part

When nothing seems to work out even after several weeks or months, it is probably time to do some self-reflection. You are probably asking the wrong questions, and that’s why you are getting the wrong results.
Before you start sending out your resume again, do yourself a favor and reassess the situation. I’ll assume you have already sent copies to different companies and probably got no response, or got a “We will get back to you”, the most common line. I probably got almost a hundred of those; I lost count. Because of my desperation and desire to get that first yes, I was uncontrollable. I got so obsessed with getting that “yes” out of all the punches thrown at me that I wasn’t thinking logically. My judgment was clouded and I forgot to reassess the situation. There was a clear, slap-in-the-face reality that my approach wasn’t working, and I refused to look at it, refused to accept it. So instead of stepping back, I kept giving it my all. It wasn’t about having a bad portfolio or not enough personal projects; it was about me not accepting the truth that I couldn’t beat those who had the proper degree. It was me being so mad about not getting it even after all my efforts. I can still remember asking myself: why can’t they see me? Was it because I didn’t have enough experience? Was it because I am a woman? Was it because I didn’t have a computer science degree? Was it because I am Asian? Was it because of my age? I had all those questions, and then I realized that yes, the answers were yes. Reality check: all yes. Not enough experience? It’s not that you don’t have enough; it’s that you don’t have any! Because I’m a woman? Yes, you are a woman, and you are competing with men who are engineers, in a field of engineers. Because I didn’t have a computer science degree? Well, absolutely; would you consult an accountant about your health? Because I’m Asian? Probably, when you are competing with a young white genius with a powerful gaming laptop. Was it because of my age? Maybe; some prefer fresh graduates. (I was 26 years old when I started!) So yes, I was the problem. I was sabotaging myself. I was the one not getting it.
So I stopped everything I was doing. I stepped back, did a lot of thinking, and all I could come up with at that time was: who am I to complain? I had nothing to lose and everything to gain, so why was I complaining? Of course it will not be easy, because if it were easy anyone could do it; it will be hard for sure! The only questions should be: what are you going to do about it? Do you have what it takes? So I quit complaining and started reassessing, experimenting, researching, and evaluating what works and what doesn’t. I changed a lot of details on my resume, and here are some tips that might help (hopefully! :] ). I sent out a lot of resumes and searched several job-hiring websites. I fixed my profiles, used a better, more professional picture, and made my resume simple and direct to the point. I eliminated a lot of unnecessary details that wouldn’t do any good. And one important thing: my intention was not to get the job but to find out what works and what doesn’t. I rebuilt my resume several times, minimized it, learned, and repeated.

1 — Include as little as possible of your non-tech experience, especially if it is of no use, but do include things about leadership, teamwork, perseverance, managing conflict with colleagues, handling pressure, and your approach to solving problems.

2 — Add things about how you are going to become a developer, how you are going to change your life, and what your plans are for the next 3–5 or even 10 years.

3 — Mention the things or sacrifices you are willing to make to get there, and what you can do to make up for the years that were lost.

4 — Explain what you can do for the company, and describe your experience while building your portfolio.

5 — Describe your Bootcamp experience, and mention whether you have any upcoming Bootcamp to enroll in.

6 — Do not overshare your weaknesses, and if one does come up, make a quick turnaround: do a follow-through, and explain the actions or precautions you are taking to make up for it. Example: Do you have experience with databases?
Follow-through: “I enrolled in a Bootcamp about databases. I know the basics, I created some scripts for MySQL and relational databases, and I was able to design a database for my small project. I understand how databases work on the web and why they matter. I am not really a backend person, but I understand how the backend should work with and connect to the frontend.” That interview got me my first job. Do whatever you can to get yourself in front of the others. Think ahead, plan, and give them something they can look forward to. Sometimes it’s not just about what the developer has done; sometimes giving them what they want to hear, offering solutions to their plans or their project’s problems, and giving them the sense that you have something to offer can turn the tables. That’s all we can give anyway. Hard work beats talent when talent doesn’t work hard. I’m a huge believer; I’ve proven this several times. Consistency, perseverance, drive: simply keep going, keep showing up, and failures will eventually run out. The best way not to fail is to never give up, ever. You’ll get your chance, and it will be the sweetest victory. Thanks for reading!
https://medium.com/for-self-taught-developers/self-taught-developer-lets-get-that-developer-job-3-4-designing-the-best-portfolio-fe45055541
['Ann Adaya']
2020-10-13 13:22:57.165000+00:00
['Web Development', 'Freelance', 'Software Development', 'Programming', 'Software Engineering']
New Year, New You, New Data
New Year, New You, New Data Can dataviz help us achieve our resolutions? The end of 2020 is nearly upon us, which means it’s time for…New Year’s Resolutions! Are you the kind of person who makes resolutions? Do you stick with them? And, most importantly, do you use dataviz to help you out? We’re planning a Nightingale theme week later in 2021 that will be focused on personal data collection and visualization, and we want you to write all about it. How do you use dataviz to help achieve your goals or encourage self-reflection? Do you turn your data into data art, à la Dear Data? Do you use bullet journals? Do you think in terms of Objectives and Key Results, like Cole Nussbaumer Knaflic (see Episode 13 of her Storytelling With Data podcast)? How does dataviz help you, personally? To give you enough time to collect personal data [Ed. note to self: New Year’s Resolution — start bullet journaling], we’re not planning to run the theme week until the spring, so watch for a call for submissions in a few months. In the meantime, good luck to all those setting resolutions! While we’re on the subject of self-evaluation, perhaps you’re wondering why Nightingale is so fond of theme weeks. First, we see more new writer submissions when we provide a topic. The other answer is, well, we’ve looked at the data. Readership and engagement go up, which means more people are learning from the insights you, our writers, choose to share. For example, during our recent Data Sensification Theme Fortnight, Nightingale daily site visits got as much as an 18 percent boost over typical daily averages. Our daily target is 2,500 site visits and we saw five days above that target during this theme. Kudos to those of you who contributed! Writers, you can help drive engagement by selecting strong lead illustrations, considering Medium’s tips for your titles, and setting subtitles that explain why people should read your article. Happy New Year to all! 
The ‘Gale will be back in your inbox in January. Trivia Question from the last issue of The ‘Gale: What everyday activity uses all five senses? Answer: Eating! You see your food, smell it, touch it (at least with your mouth), and taste it, all while hearing it go “crunch” or “slosh.” For the next question, let’s see if you were paying attention during our Data Sensification theme fortnight: What common household item did one of our writers use to demonstrate scale? Was it a) a plastic turtle, b) a LEGO brick, c) a sprinkle, or d) a Big Mac? Look for the correct answer in our next issue. In the Wild Fun visualizations and dataviz-adjacent content from around the interweb: A visualization from The Ideas Report that explores how COVID-19 affected the creativity of 35,000 people. Visualization from Gabrielle Merite, source: https://www.gabriellemerite.com Infographic from Reuters’ “World’s biggest iceberg heads for disaster,” source: https://graphics.reuters.com/CLIMATE-CHANGE/ICEBERG/yzdvxjrbzvx/index.html Scarily accurate pie-chart humor from Ann Friedman’s collection, source: https://www.annfriedman.com/piecharts/ Daily chart from The Economist confirming what many of us already know :), source: https://www.economist.com/graphic-detail/2020/11/27/playing-video-games-in-lockdown-can-be-good-for-mental-health In Case You Missed It My Plastic Footprint: a Physical Data Visualisation Project When graphs don’t elicit the empathy a topic deserves, what can you do? Kat Greenbrook explains that she turned to data physicalization as a way to communicate the marine lives lost because of one individual’s — her own — annual plastic waste. In 2019, she kept and recorded every piece of plastic she would have thrown away. This project is the result. Turtles made from an individual household’s plastic waste. 
Engaging with the Senses through Data: Spotlight on Brian Foo Data visualization artist Brian Foo spends a lot of time thinking about “non-visual data visualization.” In this interview, Jennifer Li chats with him about the advantages of communicating data via other senses and his work with libraries and museums to make public information more visible and accessible. Navigating the Green Book is an interactive experience that allows visitors to visualize a trip using the Green Book. More from Nightingale What Gordon Ramsay Taught Me About Data Visualization What If You Could Touch Data? Telling Stories With Data & Music How to Create Wonder with Data and a Physical Object Building Physical Maps…with LEGO Who Is Your Chart For? Positive Numbers: Data, Journalism, Science, and Research, with Beyond Words Six Chart Design Lessons from Visualizations of COVID-19
https://medium.com/nightingale/new-year-new-you-new-data-b78d89fe6271
['Claire Santoro']
2020-12-17 18:04:36.723000+00:00
['Newsletter', 'Data Physicalization', 'Data Visualization', 'New Years Resolutions', 'Dataviz']
Dev’hackers Day: a workshop day on AWS services!
https://medium.com/darkmirafr/le-devhackers-day-une-journ%C3%A9e-workshop-sur-les-services-aws-bb41fac2b95c
['Cyrille Grandval']
2020-02-12 17:15:49.972000+00:00
['Aws Cloudfront', 'Darkmira', 'AWS', 'Aws S3', 'Workshop']
The main pillars of learning programming — and why beginners should master them.
I have been programming for more than 20 years. During that time, I’ve had the pleasure of working with many people, from whom I learned a lot. I’ve also worked with many students, fresh from university, with whom I had to take on the role of a teacher or mentor. Lately, I have been involved as a trainer in a program that teaches coding to absolute beginners. Learning how to program is hard. I often find that university courses and bootcamps miss important aspects of programming and take poor approaches to teaching rookies. I want to share the five basic pillars I believe a successful programming course should build upon. As always, I am addressing the context of mainstream web applications. A rookie’s goal is to master the fundamentals of programming and to understand the importance of libraries and frameworks. Advanced topics such as the cloud, operations in general, or build tools should not be part of the curriculum. I am also skeptical when it comes to Design Patterns. They presume experience that beginners never have. So let’s look at where new programmers should start.

Test-Driven Development (TDD)

TDD brings a lot of benefits. Unfortunately, it is an advanced topic that beginners are not entirely ready for. Beginners shouldn’t write tests. That would be too much for their basic skill level. Instead, they should learn how to use and work with tests. Each programming course should center around exercises. I extend my exercises with unit tests and provide the students with an environment that is already set up to run those tests. All the students have to do is write their code and then watch the lights of the test runner turn from red to green. The resulting gamification is a nice side effect. For example: if the selected technology is Spring, I provide the exercises and tests within a Spring project. The students don’t need to know anything about Spring. All they need to know is the location of the exercises and the button to trigger the tests.
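As a sketch of that setup (the exercise, function name, and assertions here are my own illustration, not the author’s actual course material), a trainer-provided exercise with unit tests in Python might look like this. The student writes only the function body and re-runs the tests until they turn green:

```python
import unittest

# Exercise stub: the student fills in only this function body.
def leap_year(year):
    """Return True if `year` is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Tests shipped by the trainer; the student never edits these.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_four_hundred_years(self):
        self.assertTrue(leap_year(2000))

# Run the provided tests programmatically, like pressing the IDE's test button.
suite = unittest.TestLoader().loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is the workflow, not the specific exercise: the tests exist before the student’s code does, and the red-to-green feedback loop is the only interface the beginner needs to understand.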
Additionally, students must know how to use a debugger and have a Read-Eval-Print Loop (REPL) handy. The ability to analyse code at runtime and to have a playground for small experiments is essential in TDD. The main point is to ensure students don’t have to learn basic TDD behaviours after they’ve acquired core programming skills. Changing habits later in the students’ careers will be much harder than learning those habits now. That’s why they should live and breathe unit tests from the beginning. Later in their professional life, they should have an antipathy for projects without unit tests. They should intuitively see the absence of unit tests as an anti-pattern.

Fundamentals First

I hear very often that rookies should immediately start with a framework. This is like teaching people how to drive by placing them in a rally car and asking them to avoid oversteering. It simply ignores the fact that they still mistake the brake for the throttle. The same applies when we start students with a framework like Angular. Beginners need to understand the fundamentals of programming first. They need to be familiar with the basic elements and with what it means to write code before they can use somebody else’s. The concepts of a function, a variable, a condition, and a loop are completely alien to novices. These four elements build the foundation of programming. Everything a program is made of relies on them. Students are hearing these concepts for the very first time, but it is of the utmost importance that they become proficient with them. If students do not master the fundamentals, everything that follows looks like magic and leads to confusion and frustration. Teachers should spend more time on these fundamentals. But, sadly, many move on far too quickly. The problem is that some teachers struggle to put themselves in the role of a student. They have been programming for ages and have forgotten what types of problems a beginner has to deal with.
It is quite similar to a professional rally driver. He can’t imagine that somebody needs to think before braking. He just does it automatically. I design my exercises so that they are challenging but solvable in a reasonable amount of time by using a combination of the four main elements. A good example is a converter for Roman and Arabic numbers. This challenge requires patience from the students. Once they successfully apply the four elements to solve the challenge, they also get a big boost in motivation. Fundamentals are important. Don’t move on until they are settled. Libraries and Frameworks After students spend a lot of time coding, they must learn that most code already exists in the form of a library or a framework. This is more a mindset than a pattern. As I have written before: Modern developers know and pick the right library. They don’t spend hours writing a buggy version on their own. To make that mindset transition a success, the examples from the “fundamentals phase” should be solvable by using well-known libraries like Moment.js, Jackson, Lodash, or Apache Commons. This way, students will immediately understand the value of libraries. They crunched their heads around those complicated problems. Now they discover that a library solves the exercise in no time. Similar to TDD, students should become suspicious when colleagues brag about their self-made state management library that makes Redux unnecessary. When it comes to frameworks, students will have no problem understanding the importance once they understand the usefulness of libraries. Depending on the course’s timeframe, it may be hard to devote time to frameworks. But as I already pointed out, the most important aspect is shifting the mindset of the student away from programming everything from scratch to exploring and using libraries. I did not add tools to this pillar, since they are only of use to experienced developers. 
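The Roman-number exercise mentioned above is a good fit precisely because it can be solved with the four fundamental elements alone. A minimal sketch in Python (my own illustration, not the author’s exercise code):

```python
# Value/symbol pairs, largest first, with the subtractive forms included.
ROMAN = [
    (1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
    (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
    (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I'),
]

def to_roman(n):
    """Convert an Arabic number (1-3999) to a Roman numeral."""
    out = []
    for value, symbol in ROMAN:
        # greedily emit the largest symbol that still fits
        while n >= value:
            out.append(symbol)
            n -= value
    return ''.join(out)

def from_roman(s):
    """Convert a Roman numeral back to an Arabic number."""
    values = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, ch in enumerate(s):
        v = values[ch]
        # a smaller value before a larger one is subtracted (IV, IX, XC, ...)
        if i + 1 < len(s) and v < values[s[i + 1]]:
            total -= v
        else:
            total += v
    return total
```

Nothing here goes beyond functions, variables, conditions, and loops, yet getting both directions right takes exactly the kind of patience the exercise is meant to train.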
At this early stage, students do not need to learn how to integrate and configure tools. Master & Apprentice In my early 20s I wanted to learn to play the piano. I did not want a teacher, and thought I could learn it by myself. Five years later, I consulted a professional tutor. Well, what can I say? I’ve learned more in 1 month than during the five years before. My piano teacher pointed out errors in my playing I couldn’t hear and made me aware of interpretational things I never would have imagined. After all, she instilled in me the mindset for music and art, both of which were out of reach for me as a technical person. It is the same in programming. If somebody has no experience in programming, then self-study can be a bad idea. Although there are many success stories, I question the efficiency of doing it alone. Instead, there should be a “master & apprentice” relationship. In the beginning, the master gives rules the apprentice must follow — blindly! The master may explain the rules, but usually the reasoning is beyond the apprentice’s understanding. These internalised rules form a kind of safety net. If one gets lost, one always has some safe ground to return to. Teaching should not be a monologue. The master has to deal with each student individually. He should check how the students work, give advice, and adapt the speed of the course to their progress. Once the apprentices reach a certain level of mastery, they should be encouraged to explore new territory. The master evolves into a mentor who shares “wisdom” and is open for discussions. Challenge and Motivation “Let’s create a Facebook clone!” This doesn’t come from a CEO backed by a horde of senior software developers and a multi-million euro budget. It is an exercise from an introductory course for programmers. Such an undertaking is virtually impossible. Even worse, students are put into wonderland and deluded into believing they have skills that are truly beyond their reach. 
No doubt the teacher is aware of that but creates such exercises for motivational reasons. The main goal of an exercise, however, is not to entertain. It should be built around a particular technique and should help the students understand that technique. Motivation is good, but not at the expense of content. Programming is not easy; if students have no intrinsic motivation, coding might not be the way to go.

Newbies should experience what it means to be a professional developer. They should know what awaits them before they invest lots of hours. For example, many business applications center around complex forms and grids. Creating these is an important skill that exercises can impart. Building an application similar to Facebook might not be the best first lesson.

Similarly, a non-programmer might be surprised at how few lines of code a developer writes per day. There are even times when we remove code or achieve nothing at all. Why? Because things go wrong all the time. We spend endless hours fixing an extremely strange bug that turns out to be a simple typo. Some tool might stop working just because a library got a minor version upgrade. Or the system crashes because somebody forgot to add a file to git. The list goes on and on. Students should get a taste of these experiences. An exercise targeting an unknown library under time pressure might be exactly the right thing. ;) The sun isn't always shining in real life, and beginners should be well prepared for the reality of programming.

Final Advice

Last but not least: one cannot become a professional programmer in two weeks, two months, or even a year. It takes time and patience. Trainers should not rush or make false promises. They should focus on whether students understand the concepts and not move on too fast.
https://medium.com/free-code-camp/the-main-pillars-of-learning-programming-and-why-beginners-should-master-them-e04245c17c56
['Rainer Hahnekamp']
2018-06-20 16:14:25.434000+00:00
['Technology', 'Software Engineering', 'Software Development', 'Learning', 'Programming']
Top 5 ways to get more traffic to your website.
Top 5 ways to get more traffic to your website.

Picture this: you've spent the past few weeks getting your new business venture off the ground. On launch day you're excited to finally get your product out to the world. Days go by, then weeks, then months, and not a single sale. Does this sound familiar? Working with businesses around the world, I see this scenario time and time again, and it does break my heart to see people put so much work into their business and not get any results. But don't worry. If this is you, you are not alone. I am going to show you some tried and tested tips to boost your traffic and ultimately your sales. Are you ready? I hope so :)

1. Create valuable, shareable content.

This is a method I see time and time again from some of the world's leading online marketers, and all they talk about is how important content is. I would recommend blogging twice a week MINIMUM in order to keep Google and your readers happy. I'm building my blog page slowly but surely, and it is building more traffic. When you start writing, make sure each post is no less than 600 words, and I say this from experience. It has to be great or people won't come, and above all, Google won't rank it.

Here's the second way to attract more traffic to your website.

2. Get active on relevant social media profiles.

Social media, in my mind, is one of the greatest ways you can reach your target audience and attract them to your business website. But you can't just spam your followers all day, every day, and expect results. When I first started my social accounts, this is the mistake I made, and it was a big one. Instead:

Engage with your audience and interact.
Post valuable content.
Don't always sell; keep it friendly.
Represent your brand.

I think that last one is important. You need to remember you are representing your brand and positioning yourself as an expert in your field.
Posting a few times a week will definitely boost your traffic over time, but this is a long-term game. Moving on to step three for driving more traffic to your website.

3. Email marketing / offering an incentive.

Email marketing is still classed as one of the top ways to drive real, targeted traffic to your web page. Not only are these people genuinely interested in your website and the products you sell, but they are interested in buying from you at some point in the future. If they give up their email address to you, then they are interested, and that's a fact. This also gives you the opportunity to offer your products and services to them on a monthly basis. This is email marketing. A proven method is to offer valuable content for the first three weeks of the month and a promotion in the fourth week. Don't bombard them with tons of selling; you want to offer them valuable info that's going to position you as an expert.

Offering an incentive on your website will encourage people to give up their email address. This may be a free e-book or a discount code for your product. An incentive that adds real value will persuade people to hand over their address, which pays off in the long run.

4. Advertising / PPC marketing.

This is a big one, and if you don't watch your budget like a hawk, you will end up spending a ton of money. You hear about it all the time: people using AdWords and getting no results whatsoever. You have probably also seen the cheaper alternatives to Google. What you have got to remember when you are trying to get more traffic through PPC:

Keep an eye on your budget.
Target the right keywords.
Have a dedicated landing page for the target audience.
Try to collect their email address.
Keep it interesting.

With those key points in mind, your PPC campaigns should be more successful, and you will definitely see an increase in traffic.

5. Optimise your site for Google search.

When you think of getting your website ready for Google, there are so many categories this falls under: SEO.
Now, a lot of people hire an SEO company, but if you're like me when you're first starting out, you can't afford the prices these companies charge. This is where you need to know your stuff: metadata, link building, content marketing, social media. All of these are ranking factors for Google now, and if you can dig deep into how Google works, you will reap the benefits of huge amounts of targeted traffic absolutely free. I wrote about SEO in my How to start an ecommerce business from scratch post.

Think of it this way: the more people you please with your website, and the more eyes you get yourself in front of, the more Google will give you some loving. You can read some of my favorite posts below:

What is SEO
5 ways small businesses can compete with giants in SEO

Where do we go from here?

Working on a website to continually build traffic is hard. This is why people say it's not easy. If you follow some of the methods I have mentioned and research your own, it will be a lot easier for you in the long run, I promise. With more and more people taking their business online, it's becoming more competitive and crowded, so you have to come up with ways of outranking those companies. But the main thing I leave you with is: don't give up. It can be extremely exhausting trying to get a business off the ground with no results, and it's tough. Just keep on growing and learning, and everything will fall into place. Until next time.

SEE THE ORIGINAL BLOG POST HERE
https://medium.com/digital-vault/top-5-ways-to-get-more-traffic-to-your-website-6fa58d3f510b
['Benjamin Jones']
2017-05-25 02:56:48.260000+00:00
['SEO', 'Business', 'Digital Marketing', 'Internet Marketing', 'Entrepreneurship']
Voting Alone Won’t Fix This.
Voting Alone Won't Fix This. And the sooner we realize that, the better.

Photo by Steve Houghton-Burnett on Unsplash

It's difficult to believe that it has only been five months since the coronavirus pandemic began to take hold here in the United States, and frankly it's still difficult to even begin to wrap my mind around how much things have worsened in that short amount of time. Over 150,000 people are dead, and tens of millions of people are without work in an economic climate where an estimated 42% of jobs lost aren't expected to return. Even in spite of the risk to their health and frankly their overall safety, hundreds of thousands if not millions of people were motivated to take to the streets in the largest civil rights protest in history after the murder of George Floyd at the hands of Officer Derek Chauvin of the Minneapolis Police Department. The protests were recently re-ignited after unidentifiable federal agents began snatching protestors in Portland and throwing them in the back of unmarked vans without just cause or due process.

And here's the kicker: we haven't even begun to reach the tipping point. I'm not sure it could be overstated just how much worse things are going to get, and quickly, considering the fact that at the end of July, the extra federal unemployment benefit of $600 (the equivalent of $15 an hour for forty hours a week) expired, leaving the tens of millions of people currently dependent on that money to pay rent and mortgages and keep themselves and their families fed unable to do so. Along with it, the moratorium on evictions has ended as well, and those with any sort of federal loan are expected to begin paying again. It's not a stretch by any means to suggest that we are on the precipice of a humanitarian crisis the likes of which this country has arguably never seen, and certainly not in its modern history. Meanwhile, the Senate leaves for a month-long recess in just a couple of days.
In response to current events, on July 31st, former Secretary of Labor Robert Reich tweeted: "25,000,000 people will lose their $600 extra unemployment benefits today, and the Senate skipped out of town. Have you registered to vote yet?"

While there's absolutely no arguing that everyone should register to vote and participate in the electoral process, this notion that Mr. Reich and countless liberals seem to have — that voting alone is going to be anywhere near enough to solve a crisis of this scope — is frankly laughable. Imagine telling people on the verge of losing everything, if they haven't already, to just sit tight for a few months and vote in November. Our government has actively cultivated a scenario fundamentally designed to enrage and radicalize even the kindest, most politically apathetic people in the country, and the fact that there's still a certain segment of the democratic base under the impression that just putting Biden into office is going to solve this will never cease to amaze me.

What, might I ask, do these individuals suggest people do in the meantime? What should they do when the Biden administration — if he's even elected — inevitably doesn't attempt even a fraction of what will be required to heal this country and repair the lasting damage these past five months have done? After all, he did promise that nothing will fundamentally change.

Sure, some of those affected might simply sink into despair and accept things as they are, and our lawmakers won't even bother giving them a second thought outside the debate stage and their typical pandering. Far more likely, however, is that all of these people begin to realize that our government never had any intention of helping them in the first place. Far more likely is that they find each other, whether in person or on social media, and slowly begin to realize they are not alone in feeling as though they have absolutely nothing left to lose.
Not only that, but they realize that they are in the conditions they're in because of our government's inaction and through absolutely no fault of their own. And we expect them to just sit and wait until November 3rd?

It would be nice if voting alone could fix this. It would be nice if Americans could find solace in the fact that this is just a minor bump in the road, that we can rely solely on our electoral process to turn things around, and that everything will be okay again, but unfortunately we just can't. Why? Because for decades our politicians and the elites that control them have become far too comfortable, too insulated from our rage. They think we'll just sit back and continue to take it while they go on recess and face absolutely no accountability whatsoever.

They do not understand the rage. They do not understand the pain, desperation, and despair. Why would they? Why would they, when our government, motivated solely by power and money, has been structurally designed to allow those furthest from the real concerns of the working class to succeed? Can we really be surprised that Nancy Pelosi, Mitch McConnell, and Chuck Schumer aren't exactly feeling a dire sense of urgency when they're all worth tens of millions of dollars?

Of course, vote. But in the meantime, make them know our rage. Make them know the pain. Take to the streets, organize with one another, find solidarity in common struggle, and remain relentless until our lawmakers and the ruling class remember that power in numbers can only be suppressed for so long. They have no idea what's coming, and I genuinely hope, from the bottom of my heart, that our response to their criminal, sociopathic inaction makes them as uncomfortable as it possibly can.
https://medium.com/discourse/voting-alone-wont-fix-this-9f9352527bf3
['Lauren Elizabeth']
2020-08-06 03:21:47.690000+00:00
['Society', 'Election 2020', 'Politics', 'Government', 'Economy']
ABC May be Ready to Cancel ‘Shark Tank’
ABC May be Ready to Cancel 'Shark Tank'

The veteran reality investment show is on its last legs

In 2009, reality investment show Shark Tank premiered on ABC to decent numbers and enormous potential. Spun off from the British reality TV series 'Dragons' Den', Shark Tank is a show wherein everyday entrepreneurs come to the 'tank' and pitch investor 'sharks' their winning ideas. If a shark likes the pitch, they can choose to invest their own money in the idea. The formula hasn't changed in 11 years, and for the most part, it's worked.

Photo by Victória Kubiaki on Unsplash

Ratings Journey

The first season averaged between 4 and 5 million viewers, with one episode hitting 10 million. Season 2 saw ratings similar to the first season's, which didn't excite ABC in the least. They wanted a lot more viewership and at the time weren't optimistic about letting the show continue. According to Barbara Corcoran's podcast 'Business Unusual', the entire show could have been cancelled after the second season if it weren't for one critical decision. Outspoken owner of the Dallas Mavericks Mark Cuban was brought on as a regular cast member, a decision that is credited for the 30%+ boost in ratings that carried on for the next couple of seasons.

Season 6 was the show's peak, with regular viewership dancing around 7 million people. Starting from season 7, viewership declined until reaching the numbers the show is experiencing today in season 11. The season 11 premiere episode was seen by 3.8 million people, with viewership dropping as low as 2.75 million for its 11th episode. With ratings in constant decline, the real nail in the coffin for the veteran show was ABC's announcement last week that they'd be moving Shark Tank to the Friday night death slot.

So what went so wrong for Shark Tank? And is there any hope of revival for the beloved reality series?
Photo by Gerald Schömbs on Unsplash

The Shark Tank Model

Shark Tank has been around so long because, like all reality TV shows, it's inexpensive, it's exciting, and it has developed a cult-like following. Viewers love tuning in and watching dreams made and destroyed in front of their eyes. There's something universally appealing about watching very rich people dealing out advice to struggling entrepreneurs looking for someone to believe in them. Although I'm much more cynical than the average person, I much prefer the moments when the shark investors come down on entrepreneurs who came unprepared and delivered a sub-par pitch. I watch reality TV for the 'wine throwing' moments, and if you're looking for those, Shark Tank doesn't disappoint.

The best of these moments are the times when the sharks fight each other. After years of carefully cultivating a unique collection of highly egotistical millionaires and billionaires, ABC has ensured that legendary fights break out at least three times every season, always with one episode seeing at least two cast members leave the set in a dramatic exit. Although dramatic and addictive, the show isn't going the way of the Kardashians. That is to say, it isn't gaining popularity with time; it's losing it.

Photo by Jeanne Rouillard on Unsplash

Perfect Timing

The show aired its first season right after the financial crisis, timing that couldn't have been more perfect for the American public. The country was on its knees after the 2008 recession and desperately needed something positive to watch. They needed a show where real-life business people gained the success they'd spent years searching for. They wanted to see someone have a win, and that's what Shark Tank provided. Over the first few seasons, the reality show provided many tear-jerking moments with entrepreneur guests who had lost everything in the recession.
The camera would slowly pan into a tight close-up on the entrepreneur's crying face while the violins soared over their sob-spoken stories of bankruptcy and ruin. B camera would flank the sharks to provide follow-up shots of the sharks crying on cue. These moments guaranteed emotional responses in living rooms all around America. They made for compelling television, and for a while, they kept the show going.

But then years passed, and people stopped being able to rely on the recession as a reason for failure. In the recent bull economy, anyone who failed had only themselves to blame, leaving no time for tears. The show pivoted away from sad drama into conflict drama. Entrepreneurs couldn't blame the economy, so they took wild swings at patents and Chinese knock-off competition. The sharks would react with speeches about 'pulling yourself up by your bootstraps' and personal stories of victory over adversity. It seemed to work fine, but nothing could recreate the magic of the tears.

Photo by Jonathan Chng on Unsplash

Hobbling to the Finish Line

Nowadays the show seems to be on its last legs. The series has taken another pivot, this time down memory lane. Seasons 10 and 11 are heavily focused on the triumphs of the past, the amount of money invested over the years, and the number of entrepreneurs who have the show to thank for their successes. Like a retired singer who releases a 'best of' album before checking into rehab, Shark Tank has rested on its laurels while starting to pack up the studio.

I think the only chance the show has left of making a comeback at this point is by leaning on the coronavirus-fuelled stock market crash of early 2020. There's a chance that business people who've lost everything because of the impact of the virus could provide the lifeblood the reality series needs. But there's no guarantee it will work, and it may take years before we know the full damage the virus has caused to small business owners.
It's the End

Shark Tank lives in the Friday night death slot now, where TV shows go to die. So with that, I say goodbye, Shark Tank; I loved you while you lasted. While you never gained the viewership of ABC juggernaut 'Modern Family', you made just as many seasons, and that's worth celebrating. We will remember you by re-watching old episodes and reliving all the good moments, the boring math talk, and definitely the fights. You may not have made Kardashian-level money, but at least people respect you; plus, at the very least, your cast members were actually self-made millionaires and billionaires. That junk is priceless, and none of the Kardashians or Jenners can ever say that…

… Are you listening, Kylie?
https://medium.com/money-clip/abc-may-be-ready-to-cancel-shark-tank-10a6701fef13
['Jordan Fraser']
2020-03-04 17:01:01.509000+00:00
['Business', 'Money', 'TV Series', 'TV Shows', 'Entrepreneurship']
A Guide to Meaningful Political Discussions
A Guide to Meaningful Political Discussions

Our discussions miss their target. Here is a guide for real results.

Photo by Sushil Nash on Unsplash

The political culture in the West is currently very charged. Debates are conducted in an emotional, unfocused, and offensive manner, and in many discussions, one has the feeling that the only goal is to expose the other person. But sometimes the offensive behavior also leads to you dropping your guard and making a fool of yourself.

You can see it everywhere. There are thousands of results on YouTube if you search for compilations in which people are owned, destroyed, and triggered, no matter what part of the political spectrum they come from. But it's pretty sad to watch. The many videos, some showing entire discussions but mostly just excerpts, focus only on presenting individual missteps by the discussants. Highly complex topics are broken down into two hardened fronts, and often it is no longer about the concrete content but about individual arguments, how they were made, and how they get ridiculed. For the comment sections beneath these videos, whoever has exposed his political enemy has won, and many participants in the discussion expose themselves.

In this article, I discuss how we can have discussions that really make a difference without exposing ourselves or others.

Being right does not necessarily mean winning.

What I understand as a won discussion is conveying a new point of view to my opponent in an understandable way. Anything else would be pointless. If you are only out to expose others, this often just hardens the fronts further. Concessions become even rarer; the anger of the other side grows. Even if you objectively have better arguments, it does not mean they will be accepted as such, especially not if you treat your opponent with disrespect. Why should he then be more willing to accept your point?
Set Aside Your Political Camp

Most of us can certainly assign ourselves to a larger political camp, be it a philosophical school like the Austrian school of economics, a party like the Democrats or Republicans, or a political and social system like monarchy, socialism, or grassroots democracy. Or perhaps quite simply a political direction: Conservative, Progressive, Liberal, and so on. Most of us can assign ourselves to one or perhaps several camps, but if we are honest, it is usually not a clear-cut decision. Even within the most specific political camps, such as parties, there are vast differences of opinion.

We are generally prejudiced about such camps. Whenever someone reveals himself to be a follower of a certain camp, we like to associate him with all the positions we ourselves reject. But your political camp just doesn't matter; only the concrete political position you want to represent in a discussion does. By leaving the camp out of it, you can avoid prejudices, deviations from the topic, further rejection, and incomprehension. After all, your political personality is far too complex to be expressed meaningfully by belonging to a political camp.

Questions Instead of Statements

One of the most common reasons people embarrass themselves in political discussions is that they immediately make their position clear, often in much too general a way. "I am against abortion," "I am for drug legalization," "taxes are robbery," "white privilege does not exist" — all good examples of general and offensive statements. Even though they are very clear, these statements make us very vulnerable. If we say, for example, that we are against abortion in general, our opponent now springs into action. He will try to find a scenario in which he thinks even we should be in favor of abortion. Even if he finds one and we still disagree, he may attack us for our alleged motives, implying ignorance.
Unfortunately for us, our opponent may be quite familiar with the subject of rape and subsequent pregnancies. Now he can attack us from his comfort zone. Even if we walk back our generalized statement, this is anything but good: although it actually shows a lot of courage, our opponent, and maybe even the audience, will from then on regard us as not credible.

If you are sitting across from someone who has just walked back a statement, you should not attack them for it. Instead, praise your opponent for it. The more sympathy you build, the less likely it is that he will attack you clumsily, and the easier it becomes for him to make concessions — which is exactly what you want him to do.

My recommendation is therefore to always ask questions. This way, you can get to know your opponent's position without facing a typical "I never said that" afterward. Besides, you never have to walk anything back, because you didn't make a clear statement; you're only interested in the motives of your counterpart. The last advantage is that you give your opponent the space to explain his opinion. This way, he is less likely to fall into an emotional attack-or-defense posture and can instead move forward.

How this can be used to take action

You still have not revealed your point of view? Very good. But you would like to slowly intervene and make an argument? No problem. The trick is to express your point of view through something or someone else. Instead of saying "no, I see it this way and that way," you can go a little further: "Interesting point, but there is also politician x who once said y. What do you think about that?" This way, you do not reveal your position, but you can actively influence the other person's opinion. A positive side effect: especially if you take up the opinion and justification of a famous idea, party, study, or personality, it is not unlikely that your political opponent has already had to deal with it.
So there will be fewer misunderstandings, he might already have an answer, and depending on the source, there may now be facts in the discussion.

Draw Attention to Common Interests

Nowadays, when two different parties discuss with each other, it seems like different worlds colliding and causing a huge bang. The funny thing is that most political camps have the same fundamental interests; only the approach to achieving them is different. For example: people who tend to be economically libertarian are often of the opinion that tax cuts will increase prosperity, while people who tend to be left-wing are often of the opinion that taxing the rich increases prosperity. Neither side has any interest whatsoever in anyone being bitterly poor. Only their approaches to ensuring general prosperity differ. As I said before, many discussions are very charged because they contain prejudices and ideological beliefs about our opponent. Interestingly enough, these reservations can usually be overcome relatively quickly.

Do not interrupt anyone.

It's really not difficult: you only speak when you really have the feeling that it's your turn, or when you are invited to. No matter the context of communication, interrupting someone is always inappropriate. It will only lead to a more tense situation, and your counterpart may even lose his flow of speech. You want others to hear you out, so hear everyone else out. After all, the better point of view is supposed to win in the end, not the one whose holder best kept the other person from arguing. If someone interrupts you, you should address it immediately. Interjections of single words also count as interruptions.
https://medium.com/discourse/a-guide-to-meaningful-political-discussions-2231e95e4830
['Louis Petrik']
2020-09-03 18:22:37.147000+00:00
['Language', 'Election 2020', 'Politics', 'Society', 'Culture']
How to Be an Active Listener
How to Be an Active Listener

Communication begins with listening — properly.

I hate it when people don't listen. Sure, they hear what you're saying. They register the noise that comes out of your mouth when you are speaking. But they don't really hear you. They may be able to respond, but their effectiveness is hampered by the fact that they weren't listening intently in the first place.

One of my most memorable first dates, with a guy I thought was cute, was a trainwreck. Nothing super dramatic — he just seemed to have the brain cells of a pigeon. It was such a shame because he was a hottie. We were having dinner after going to watch a film. That had been super awkward; the guy kept trying to make a move, even though we had said no more than three words to each other in person. As we were eating dinner, he had his phone out on the table and was super unsettled.

"So what're your thoughts on the issue?" I repeated. He looked up, half startled. "What?" "You know. On institutional racism in the education system," I persisted. Granted, it was a heavy topic for a first date, but I'd just spent a few minutes talking about my passions after being prompted. "It's bad, I guess," he shrugged. I raised my eyebrow and continued eating in silence. When we finished, he offered to walk me home, during which he repeatedly attempted to make a move. I was so unimpressed, I just patted his shoulder and insisted on making the journey alone. "I'll call you."

I did text him later, but he had swiftly been relegated to an acquaintance. It was a shame too — he ticked every box I had when it came to physical attraction. I just couldn't stand romantically engaging with someone with the personality of wallpaper. Perhaps it's happened to you in the same kind of situation. Or maybe you have a friend who just doesn't seem to hear anything you say when you're talking. Or a parent, or a colleague. Maybe the story hit home for you because the guy is you.
Wherever it hits home for you, I think one thing is pretty clear: we could all do with a bit more active listening when it comes to our relationships.

Make Eye Contact

The first law of active listening is eye contact. Now, this doesn't mean that at all moments in the presence of someone else, you must stare into the very depths of their soul. That's creepy. Don't do that. What it does mean is ensuring that you focus your attention wholly on the individual. Attention is important. How many dates (aside from mine) were killed by a lack of eye contact? My general rule of thumb is to avoid distractions, such as phones, when I am in the presence of another person. I tend to be more stringent with this when trying to build intimacy. Eye contact is important because it creates a level of intimacy that encourages responsiveness in the person you are listening to. It communicates respect, genuine interest, understanding, and appreciation. It's simple but underrated. Maintain eye contact.

The Devil in the Details

Details matter. Listening out for the detail in what the other person says is crucial. For those of you with absolutely no game, listen up. You don't need a corny pickup line or to overdo it on the sexual tension. All you need to do is pay attention to the details. If you are getting to know someone for the first time, make an active effort to ask questions that require details. Ask them what their favorite food is, why, and what flavors they think go well together. Ask them about an embarrassing childhood moment, and how they felt specifically at the time. Ask them their opinion on a controversial topic, and figure out precisely what experiences shaped that worldview. It's not that hard — it just requires active effort. It's all about teasing out the details you can use later on.

Repetition is Your Best Friend

Repeating elements of what was said will help you by a long shot.
Not only does it clarify whether what you heard was what the other person said, but it sets you up to answer as accurately and as true to yourself as possible. It also shows that you were actively listening. It's kind of like an exam: you know when you receive a question, and your teacher advises you to use elements of that question in your answer? It makes sure that you focus on the question and answer accordingly. It demonstrates that you know exactly what the examiners are looking for. Apply this principle to listening. When people are speaking and sharing their stories or opinions with you, looking for key phrases that you can repeat shows that you understand their viewpoint. Don't repeat everything, of course. That would be weird. Balance, my dearest.

Easy on the Projection

Stop projecting. There, I said it. Somebody had to. As human beings, the way we engage with the world is subject to our experiences, our feelings, and our understanding of the world. Whilst giving advice or sharing your opinion is one thing, projecting these things onto another person is another. The first three tips will help you avoid this. Oftentimes, projection happens when a conflict arises between your unconscious feelings and conscious beliefs. To deal with these issues, you transfer ownership of your feelings and thoughts onto an external source. For example, internal anger is an emotion often projected onto someone else. If you are still dealing with unresolved anger at another situation, chances are you're not going to process what someone else is objectively saying to you. Do you ever get into an argument with a loved one over something stupid? Chances are, one of you was projecting and not listening. That's why you must check your emotional state, for the benefit of the other person.
https://medium.com/live-your-life-on-purpose/how-to-be-an-active-listener-9e5822103c6b
['Renée Kapuku']
2020-07-16 19:01:01.142000+00:00
['Leadership', 'Self', 'Advice', 'Creativity', 'Personal Growth']
A Better Approach to Dark Mode on Your Website
A Better Approach to Dark Mode on Your Website

Learn how to implement Dark Mode today

Image by the author

Dark Mode seems to be here to stay. Initially, it was mainly used by developers in their code editors, but now it's all over the place: on mobile, on desktop, and now even on the web. Supporting Dark Mode is increasingly turning from a nice-to-have into a must-have. Of course, it depends on your niche, but if you are here, chances are you want to implement Dark Mode on your site or for your clients. When it comes to dark mode for websites, there are multiple approaches, and I'd like to talk with you about two of them: the first is the easiest to implement, and the second is my preferred option, the one I actually used to implement Dark Mode on livecodestream.dev. Let's get started with the first option.
https://medium.com/better-programming/a-better-approach-to-dark-mode-on-your-website-dadbe8c55b40
['Juan Cruz Martinez']
2020-08-10 12:01:02.222000+00:00
['Design', 'JavaScript', 'Programming', 'Web Development', 'CSS']
AI News Roundup — June 2019. by Gabriella Runnels and Macon McLean
AI News Roundup — June 2019

The Opex AI Roundup provides you with our take on the coolest and most interesting Artificial Intelligence (AI) news and developments each month. Stay tuned and feel free to comment with any stories you think we missed!

_________________________________________________________________

This A.I. Is Starting on the Right Foot

Photo by Alex Blăjan on Unsplash

This month, researchers from the University of Michigan and the Shirley Ryan AbilityLab presented an open-source bionic leg that uses sensor data and artificial intelligence to anticipate and respond to users' movements. This innovation represents a significant advance in the field, as bionic legs are a notorious challenge in prosthetic limb design. Although its underlying technology is open-source, the total price to build one of these legs clocks in at around $28,500 — not exactly the budget of your typical DIY project. But by making this technology open-source, researchers are hoping to tap into the knowledge of the general public to further improve this life-changing technology.

Now That's One Hot Model

Image by Free-Photos from Pixabay

According to a recent study, building and training certain AI models can produce "more than 626,000 pounds of carbon dioxide equivalent" — roughly the carbon footprint of the average American over seventeen years. In fact, the type of model that powers Siri and other AI voice assistants may be one of the worst culprits. This set of algorithms, part of a field called Natural Language Processing (NLP), has made impressive strides in the last few years, but evidently at a cost. Training these complex models on huge datasets requires enormous computing power, and therefore enormous amounts of energy. If the robot uprising isn't the downfall of humanity, AI might still wipe us out from the greenhouse gases alone.

Reduce, Reuse… Reinforcement Learning?
Photo by Gary Chan on Unsplash

While the sheer amount of energy consumed by NLP models and other machine learning algorithms is alarming, we can take some comfort in the fact that not all AI is environmentally destructive. In fact, the Colorado-based startup AMP Robotics and the Norwegian company TOMRA are both developing cutting-edge AI technologies to improve the efficiency and accuracy of the recycling process. For example, AMP's robotic technology can differentiate between materials that recycle differently, but that humans tend to lump together, while maintaining a very high processing speed. To teach the models how to correctly identify and sort recycled materials, a convolutional neural network is trained on millions of images. (Let's hope the carbon emissions from model training don't offset the recycling benefits!)

Deepfake Detective

Photo by Kevin Ku on Unsplash

If the idea of deepfakes scares you (and it should), you've probably wondered what can be done to protect us against them. Good news — researchers have identified what they call a "soft-biometric signature," which serves as an ersatz watermark to verify the authenticity of a video. Using generative adversarial networks, data scientists have found ways to detect an individual's set of unique facial movements that serves as their own personal speech signature. Misinformation and false statements are a major concern, especially for world leaders; our ability to thwart bad actors will have a significant impact on our future, in ways both obvious and insidious.

AI at the Speed of Light

Image by Gerd Altmann from Pixabay

Bill Gates has just given money to a new startup that designs high-performance computer chips that use light instead of electrons to perform computations. Luminous, the company in question, has made waves recently with news of their prototype chip that ditches energy-intensive electrons for comparatively efficient photons.
Light waves are bent within the chip via “waveguides,” which move data more quickly than traditional processes. Given a new Moore’s Law-esque finding that says “the amount of computing power needed to train the largest AI models is doubling every three and a half months,” the development of this new chip is a welcome sign of progress. Speed won’t be sacrificed either; in fact, these new processors will likely be far faster than their electrical predecessors. Even better, with widespread adoption, these chips would significantly reduce AI’s environmental impact. That’s it for this month! In case you missed it, here’s last month’s roundup with even more cool AI news. Check back in July for more of the most interesting developments in the AI community (from our point of view, of course). _________________________________________________________________ If you liked this blog post, check out more of our work, follow us on social media (Twitter, LinkedIn, and Facebook), or join us for our free monthly Academy webinars.
https://medium.com/opex-analytics/ai-news-roundup-june-2019-73600360f631
['Opex Analytics']
2019-07-30 18:40:10.458000+00:00
['Artificial Intelligence', 'Roundup', 'News', 'Data', 'Data Science']
Building strong collaborative development teams
Early learnings

In my personal life, I am a self-admitted "me first" person. Of course, this has evolved as I have gotten married and had kids, so it's not as extreme as when I was younger. However, in my professional career, I have embraced the concept of "we first" approaches to collaboration and motivating colleagues to achieve team success that could not have been accomplished in a "me first" environment. The origin of this transformation was in the late 1990s, when one of my managers at a startup educated me on the power of the term "we" and how to remove the words "I" or "me" from work conversations. We talked about how this could be applied to discussing product deliverables, demos, and even the dreaded discussions around code defects and website crashes. This very brief but insightful conversation has become a foundational component of how I rate great teams as I have worked at companies ranging from about 5 employees to, today, over 400,000 employees.

Approach to team building

One of my passions is cutting-edge technology. IBM is famous for creating small incubator teams that eventually evolve into large, globally distributed teams. The mission of these teams is to identify a market opportunity and establish themselves as market segment leaders. The goal for development teams is to deliver amazing technology with Subject Matter Experts (SMEs) who become industry thought leaders in this technology. When we build teams, there is a strong focus on having well-rounded developers. In our missions, we aim to form teams of developers who are all willing to present at conferences, learn new technologies, and work collaboratively with other teams (both local and remote). The idea is that these characteristics are traits we see in strong teams, and they also allow us to give our team members the foundation to expand their skills.
We have often seen teams that are very focused on a single aspect of a product and not very aware of the overall product's capabilities. In our model, we have a variety of developers with different experience levels who have demoed the end-to-end product to a variety of stakeholders, including executives. The idea is that if we can demonstrate the consumability and ease of use of our capabilities by having all of our developers understand the overall mission, it becomes easier to show this value to our customers. As an added bonus, developers can empathize with and understand pain points in user flows and can collaborate to reduce friction between teams.

Pivoting mission and resources

Being part of an incubator is not always spectacular. If you follow the Lean Startup methodology, you will come to love and embrace the concept of pivoting. The teams I have been part of have definitely pivoted more than their fair share. In this "we first" environment, our teams focus on how we can quickly adjust to the market and move forward. In this model, team members who have embraced new technologies and are willing to collaborate always come out on top. As our teams typically start small and grow, we often rebalance teams and formalize leadership roles, both within existing components and by establishing new ones. We have even quickly shifted to a completely different incubator with a target of having a demoable proof of concept in two weeks.

Technology choices

When it comes to technology choices, we have seen that a bottom-up approach gives the development team a much stronger sense of ownership and investment in any mission. In this model, we have implemented a decentralized adoption model where developers collaborate across teams to determine the best-of-breed technologies that can most accelerate our development efforts.
What we have found is that teams evaluate tools across a variety of criteria, ranging from ease of use to code quality and external adoption. By removing the friction that often comes from dictating solutions to teams (it is not uncommon for companies to dictate tech choices, and this typically fails), our developers instead spend time educating teams on the benefits of these solutions and become promoters and SMEs of these tools throughout the larger organization, often acting as change agents for the company.

Agile approach

This past summer, our team embraced the notion of self-organizing teams for the duration of a two-week sprint. Typically our teams are segmented by components and the sprint deliverables within each component. This works great, but we felt we could embrace the "we first" model even more if we empowered our teams to identify the work items from the backlogs and size their teams appropriately. In the end, we found that developers gravitated to deliverables outside their comfort area, and by working with other team members with more experience in a given area, developers gained completely different perspectives on the deliverable, and we got amazing results. If that was not enough to excite you, we also took a different approach to sprint demos. In the past, we had each component team demo its deliverables. What we found is that stakeholders often struggled with the end-to-end picture of what we were delivering and its value. In the case of internal deliverables, this was even more evident. In our new experiment, we expanded our model from everyone being able to demo to having volunteers each sprint demo the scenario we were targeting for that sprint. In this model, we only demoed the scenario and focused on the value of what we were looking to achieve with the deliverable.
With a singular vision around "we first", developers understood that while every aspect of the sprint work was important, there are aspects better suited for external audiences. To emphasize this point, we allocated time at the end of the sprint to demonstrate internally all of the work done for a given sprint, for a smaller audience that extended beyond the development team but was not as large as the stakeholder demos.

Conclusion

In this article, we talked about one organization's approach to building strong technical teams. The teams were well-rounded, with strong personalities and skills, but all had a common vision of how to deliver first-class solutions in the cloud. If you liked this article, please read our article "Transitioning to a startup mentality", detailing how we evolved a very large enterprise development team to act like a startup.
https://medium.com/devops-for-the-cloud/building-strong-collaborative-development-teams-65b45f6e95c1
['Todd Kaplinger']
2017-01-20 11:05:14.817000+00:00
['Lean Startup', 'Software Development', 'Agile', 'DevOps', 'Cloud Computing']
Why Was the 20th Century So Bloody?
It ought to send a shiver down our spines that one of the world's most murderous centuries lurks so closely in the rear-view mirror. How to make sense of it all? How, even, to feel the pressing weight of 120-some million dead? Perhaps because of these eye-watering quantities, such a figure fails to imprint us with the magnitude of such a loss, existing instead as a floating, abstract statistic. Along with two world wars — the first devastating spectacles of international conflict of their kind to go down in history — and a multitude of bloody ethnic feuds, the century's dictatorial regimes fanned the flames of mass death, managing to double the body count. For every person gunned down on a smoky battlefield, just as many were starved to death, worked to death, or otherwise dragged down the road to death by countless other gruesomely creative means. Decisively removed from this century by two decades, we're apt to regard it as a bad dream of the past, a developmental period that — thank God — we're through with. But we cannot really wash our hands clean of such a memory. This is because history is not something that happens to us. Instead, it is something we actively fashion into existence; the grievous events that constituted the 20th century were both perpetrated and perpetuated by humans. And the irony of it all: who could have foreseen the back-flip into the abyss of mass killing? Surely those basking in the glow of the Enlightenment as it passed over the Western world would have pointed towards a promising forecast for humanity on the horizon. Out of the growth of intellectual reason famously blossomed a new faith in the merits of democracy, in addition to a well-rooted conviction in the sacredness of the individual. The Enlightenment was a celebration of the convergence between the scientific and the humane.
But contrary to the optimism of this age, the future that lay ahead would not gamely soldier on with this (admittedly somewhat delicate) experiment, but would start running amok with things, wreaking havoc — even reversing course, one could say. How else could you frame the chilling disposability of human life during the 20th century? And the sheer scale of it all, for that matter? The Enlightenment wasn't able to transcend the muck. Instead, in a cruel twist of fate, it provided some of the actual fodder that made the 20th century's bloody achievements possible.

Marxism

Despite Karl Marx failing to live long enough to cross into the 20th century himself, the theories of class struggle that he so militantly belabored during his existence gained a mighty foothold posthumously, first among Russian radicals, and later mutating and spreading to their Asian neighbors to the south. The politics of Marxism diverged from those espoused by the genteel thinkers of the Enlightenment. Instead, Marxism called for revolution as opposed to peace, strength as opposed to cooperation, cold-blooded will as opposed to diplomacy. Marxism was built on the class struggle and predicated on the notion of righteous overthrow. Claiming philosophical loyalty to the notion that the ends justify the means, the soup of Marxist thought had several other awfully revealing ingredients. One was a requirement for perpetual conflict; another was a spirit of absolutism, manifested in a dogmatic vision of what needed to be done just as much as in the moral certainty with which these objectives were painted. The way the Marxist narrative was supposed to function, the proletariat would stage an angry uprising against the bourgeoisie, fueled by righteous revenge. But this isn't necessarily how things played out: an organic rebellion of Marxist proportions didn't materialize, in part because Marxism wasn't simply a means to harness resentment bottom-up but was, instead, a top-down theory.
Marxism came wrapped up in all this theoretical gravitas, in the clunky language of socioeconomic analysis and dialectical materialism. Lenin, for example, Russia’s premier communist agitator, churned out volume upon volume of Marxist rhetoric over his lifetime. What resulted from Lenin’s pen (and many others) was a long, dark corridor of Russian history filled with bodies. Intellectual thought had given rise to a theory so destructive that the human powers of rationality and reason had succeeded in blotting out human minds themselves. Again — the irony, the excess. What’s significant about a lot of the 20th century was the amount of death associated with ideas, with theories. Perhaps humans had grown out of dueling each other, utilizing the sword to take care of some petty crime and exact some sweet revenge, but if it was thought that we were removed from the primitivity of senseless death, we were horribly mistaken. In reality, the 20th century mastered the art of efficient mass killing. Now, war and death resulting from territorial squabbles or religious ideology had frequently blighted the historical calendar. But here Marxism lay claim to millions dead in the service of what was really a highly abstract idea — communist utopia. Its scale blotted out any weird misgivings about the human sacrifice it entailed. Its intellectual texts (that did so much to advance the ideology) could be conceived as a dangerous hyperextension of rationality. All this theory, all this soaking in abstract philosophy proved disastrous. The Enlightenment advanced civilization and celebrated the glorious contributions of the human mind, but here was blistering evidence of its runaway excesses. Rationality, when in the service of ideology, could be deadly; it could supersede a moral loyalty to the considerations of humanity. In this case, Marxism could trample over ethics with its neat-and-tidy utopian vision. Marxism stands as historical evidence of the chilling toxicity of theory. 
Who could have imagined how enduringly it sustained the death toll? When humans rationalize, we aren't always making something good. Instead, we're constructing something that works. This distinction marks the capacity of the human mind as something that needs to be consistently governed by ethics. But these the Marxists gleefully cast off, heeding the siren call of power, efficiency, and hyperextended reason. All of these creatures of destruction descended from Marxist sentiment are imprinted on the 20th century's worst crimes. Consider Stalin's purges and appallingly bungled collectivization efforts, or Mao's mass starvation, or even Hitler's industrialized process of human extermination. Each dictator had a clear picture of their ideal state. Each fell slave to the idea of utopia in addition to a toxic theory. And each rationalized their process by means of these utopian visions and beloved theories. In a less consciously evil category, the world wars were equally efficiently destructive. One of the elements that made the 20th century one of the deadliest was not only the sheer industrialization of death but also the rationalization of it.

Nietzscheism

Another man who never really made it into the 1900s — in this case, just barely crossing the threshold — but whose ghosts lingered on was Friedrich Nietzsche, the German philosopher who scoffed at the notion of morals, spearheaded the glorification of strength above all else, and famously declared that "God is dead. . . and we have killed him". He left a philosophic legacy that — who could have known — enchanted 20th-century minds. Hitler, for one, tore a page from Nietzsche's musings and modeled his Nazi project on its precepts. Power and instinct gained dominance over virtue and bridled reason. What mattered was a German nation with strength and vigorous obedience, unbound from the sticky web that was a moral conscience. Energy mattered, not wisdom. Results mattered, not means.
Hardness and certainty were idolized, and sentimental thoughts and ethical unease were scolded as the mark of a weak person. And weak persons were anathema in the 20th century. Nietzsche’s signature cynicism was behind his conviction that morals would gradually enter extinction. In his writings he invoked the Darwinian simplicity of the survival of the fittest — and had no problem condoning the use of violence to achieve certain ends. He patterned human nature on the animal kingdom, an interesting “regression” for someone of the 19th century. Nietzsche turned philosophy back to earth, unveiling uncomfortable truths of psychological import in the process. He was smart but cynical, and dismayingly, served as chief inspiration for some of the century’s worst developments (which, it should be said, was not his intention). The Nietzschean amoral stance accompanied by the dazzling ideal of supreme strength translated to people becoming very good at doing very bad things. The horrible efficiency of war and genocide in this era and the smoldering ruin it caused stands as a premier example. Again, we see the excesses of both industrialization and rationalization at work here. Could it have been avoided? Jonathan Glover, author of the book Humanity: A Moral History of the Twentieth Century admits to seeing the horrors of this era as somewhat inevitable, carefully putting forth the following: “But the French Revolutionary guillotine and the republican baptisms — and the interest in the possibilities of gassing — all show how naturally inhumanity combines with technology. No doubt the facts of twentieth-century history would have been different if the assassination of the Archduke had not taken place, but inhumanity would still have been combined with modern technology. 
It is hard to see that there was much chance to escape some variant of the bloody twentieth century we know.” His view is one that takes into account how just as humans are creative (innovating and discovering) we are still just as cruel. For example, developing better weapons made us better at a thing but importantly, not better people. And it can be very hard for creativity to resist being co-opted by cruelty. The readiness of humans towards cruelty, revenge, or persuasion to succumbing to these dark forces fails to vanish as the centuries tick by. And the more we age as civilizations, the greater a threat we actually become to ourselves. The 20th century turned out to be a devastatingly flammable brew of philosophy, politics, and technical advancements. In that fated century, experiments of political utopia and weapons of mass destruction came to pass. What a strange, contradictory mixture of idealism and the death wish. What a nod to the shiver-inducing drive we humans have to push the envelope. The heady industrialization of 20th century wars and political projects constituted technical progress while callous rationalizations were heralded as ever-more-sophisticated philosophy. But progress, we should note, is not always good. We can encounter excess; we can trip ourselves up. Humans can become very good at doing very bad things. And the blood-spattered 20th century is all too willing to serve as a reminder of just the full strength of these very capacities.
https://medium.com/lessons-from-history/why-was-the-20th-century-so-bloody-1bd8b86f5ad6
['Lauren Reiff']
2020-03-12 03:59:06.825000+00:00
['Politics', 'History', 'Society', 'World', 'Philosophy']
I do care about my City Hall’s Data.
I am a Computer Science undergraduate student from Recife, Pernambuco, Brazil. And Recife is a tech hub. I can't tell my whole story in tech here, but to sum up:

Studied in a technical school of Game Development (which also had lectures on Design and Game Design).
Passed the entrance exam for university (ENEM).
Joined the biggest IT company in town (C.E.S.A.R.).
Struggled with the math behind the course, left the job, and learned about the Data Science field.
Started research on Educational Data Analysis.

Then… I found out about http://www.dados.recife.pe.gov.br, which is an open data initiative from my city. And there is a dataset about the public health care system called SAMU. The dataset is fully in Portuguese, but you can download it there; it covers all the call requests made in 2015. I decided to analyze all this data to improve my skills. I hope you enjoy a bit of the descriptive analysis I've done and some of the questions I've raised and tried to answer.

What is it?

The data I've analyzed (and still am analyzing) are the emergency calls of 2015 made to SAMU (for those who don't know, it's the public health emergency system in Brazil). There were 70,011 calls! You can find this data at this link, and it's divided into six datasets: "ambulances", "neighborhood", "districts", "expertise", "removes", and "calls". Our variable of focus is desistência ("give up on call"), filled in a posteriori. It's the variable whose values I'll predict with a Machine Learning algorithm; the problem will be binary, so its values are "give up" or "not give up".

Exploratory Analysis

Basically, an exploratory analysis is used to understand more about how the variables relate to each other and which insights I can get. When I split my dataset by sex, I can already plot a simple bar graph with "some" result: the difference in desistência between men and women is insignificant; the former gave up 14.5% of the time and the latter, 13.3%.
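As a sketch of how that split-by-sex figure can be computed with pandas (the column names below are hypothetical and may not match the real SAMU CSV):

```python
import pandas as pd

# Hypothetical miniature of the calls table; the real SAMU dataset
# uses Portuguese column names that may differ from these.
calls = pd.DataFrame({
    "sexo": ["M", "M", "M", "F", "F"],
    "desistencia": [1, 0, 0, 1, 1],  # 1 = the caller gave up
})

# The mean of a 0/1 flag per group is exactly the give-up rate.
rate_by_sex = calls.groupby("sexo")["desistencia"].mean()
print(rate_by_sex)
```

The same series can be fed straight into a bar plot with `rate_by_sex.plot.bar()`.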
From the variable sex, I decided to mess with age and started playing with the spans, asking, for example: how do older people behave? And I thought about how to do a correct segmentation. It was cool to find that there exists a legal definition of "elderly" in Brazil (translated from the Portuguese):

In Brazil, anyone who has reached 60 or more years of age is considered elderly, whether man or woman, national or foreigner, urban or rural, a private-sector worker or a public servant, free or incarcerated, working or retired, including pensioners, and whatever their social condition (Martinez, 2005, p. 20).

Source: http://boilerdo.blogspot.com.br/2013/04/quem-pode-ser-considerado-idoso-nos.html
Another source: http://www.planalto.gov.br/ccivil_03/leis/l8842.htm

Data Transformation (and keeping up with the analysis)

There is a column called "solicitacao_data" (call date), so I thought about which questions I could answer based on time. Here goes the first one: which is the worst day and time to call SAMU? As it is a date, the first thing I had to do was a transformation: create a column that, given a date, returns the day of the week. The distribution below is the result. And how did I answer the question from this section?
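The weekday transformation described above can be sketched in one pandas call (the dates here are made up; the format string follows the one used later in the article):

```python
import pandas as pd

# Two made-up call timestamps in the "%Y-%m-%d %H:%M" layout.
df = pd.DataFrame({"solicitacao_data": ["2015-03-01 14:30", "2015-03-02 09:15"]})

# Parse the strings once, then derive the weekday name as a new column.
parsed = pd.to_datetime(df["solicitacao_data"], format="%Y-%m-%d %H:%M")
df["solicitacao_diadasemana"] = parsed.dt.day_name()
print(df["solicitacao_diadasemana"].tolist())  # ['Sunday', 'Monday']
```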
The following code (#easy #butno) is how I calculated the number of minutes between the call and the arrival at the hospital, even though I still think I have to look for outliers (my bad, I haven't done it yet):

# test to compute the waiting time -> arrival at the hospital
tempo_1 = solicitacoes_2015["data_acionamento"][0]
tempo_2 = solicitacoes_2015["data_chegada"][0]
fmt = "%Y-%m-%d %H:%M"
tempo_1_parsed = datetime.datetime.strptime(tempo_1, fmt)
tempo_2_parsed = datetime.datetime.strptime(tempo_2, fmt)
tp1_ts = time.mktime(tempo_1_parsed.timetuple())
tp2_ts = time.mktime(tempo_2_parsed.timetuple())
print(tempo_1)
print(tempo_2)
print(str(int(tp2_ts - tp1_ts) / 60) + " minutos")

index = 0
tempo_de_transporte = []
for data_acionamento in solicitacoes_2015["data_acionamento"]:
    if type(data_acionamento) is not float:  # NaN is considered a float type
        data_acionamento_parser = datetime.datetime.strptime(data_acionamento, fmt)
        data_acionamento_ts = time.mktime(data_acionamento_parser.timetuple())
        if type(solicitacoes_2015["data_chegada"].iloc[index]) is not float:
            data_chegada_parser = datetime.datetime.strptime(
                solicitacoes_2015["data_chegada"].iloc[index], fmt)
            data_chegada_ts = time.mktime(data_chegada_parser.timetuple())
            tempo_de_transporte_instancia = int(data_chegada_ts - data_acionamento_ts) / 60
            tempo_de_transporte.append(tempo_de_transporte_instancia)
            print(str(tempo_de_transporte_instancia) + " minutos")
        else:
            tempo_de_transporte.append(9999)  # meaning missing
    else:
        tempo_de_transporte.append(9999)  # meaning missing
    index += 1

To be able to do something with these data, I had to use the GroupBy function, which, as the name says, creates groups from the distinct values of a column:

# grouping by weekday: which of them has the largest number of give-ups?
solicitacoes_2015_por_dia = solicitacoes_2015.groupby("solicitacao_diadasemana")
for dia, solicitacoes in solicitacoes_2015_por_dia:
    print("No dia " + dia)
    qtd_desistencias = solicitacoes[
        solicitacoes["motivodescarte_descricao"] == "DESISTENCIA DA SOLICITAÇÃO"].shape[0]
    print(str(qtd_desistencias) + " Desistências")
    # note: this prints a fraction; multiply by 100 for a percentage
    print(str(qtd_desistencias / solicitacoes.shape[0]) + "% de desistências sobre todas as chamadas")
    print("")

I couldn't find the hour yet, but SAMU takes way longer on Friday (Good #Party :/).

#TIP 1: If possible, not on Friday.

There are many reasons for this (that I can think of but can't prove, since I haven't gotten any other dataset to do so), but it's possible that everyone is going out on Friday night; maybe it would be interesting to match the slowest hospitals with their locations. (Does somebody have a dataset about Recife traffic?)

Which days have the most calls to SAMU?

In decreasing order, the days with the most calls to SAMU were Sunday, Saturday, and Monday. On Sunday, the waiting time for SAMU was about 38 minutes on average (removing the calls marked "9999", the missing values), but this still includes the calls that were discarded. Here is the snippet to get the data:

for dia, solicitacoes in solicitacoes_2015_por_dia:
    print("No dia " + dia)
    qtd_tempo_gasto = solicitacoes[
        solicitacoes["tempo_de_transporte_minutos"] != 9999]["tempo_de_transporte_minutos"].sum()
    print(str(qtd_tempo_gasto) + " total no dia durante 2015")
    print(str(qtd_tempo_gasto / solicitacoes.shape[0]) + " de minutos em média")
    print("")

Something relatively easy to do, and these transformations brought up some interesting things, such as our next question:

What are the days with the most give-ups?

And again, the champion is Sunday, with 1,800 give-ups, about 16.7% of all calls on Sunday. This means that SAMU has a margin of almost 17% of WASTED CALLS.
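As an aside, the elapsed-minutes computation above can also be done in vectorized pandas, which lets missing timestamps become NaN instead of a 9999 sentinel. A sketch with made-up data (column names follow the article; the values are invented):

```python
import pandas as pd

# Made-up dispatch/arrival timestamps; None stands for a missing value.
df = pd.DataFrame({
    "data_acionamento": ["2015-01-04 10:00", "2015-01-04 11:00", None],
    "data_chegada":     ["2015-01-04 10:38", None, "2015-01-04 12:00"],
})
fmt = "%Y-%m-%d %H:%M"
acionamento = pd.to_datetime(df["data_acionamento"], format=fmt)
chegada = pd.to_datetime(df["data_chegada"], format=fmt)

# Rows with any missing timestamp yield NaN, which .mean() skips.
df["tempo_de_transporte_minutos"] = (chegada - acionamento).dt.total_seconds() / 60
print(df["tempo_de_transporte_minutos"].mean())  # 38.0
```

With NaN in place of 9999, the per-day averages no longer need the `!= 9999` filter.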
Imagine, for example, possible causes: someone rescued the patient first, the patient was rescued by a private company (some people call both the public and the private service at the same time), and "mock calls" (when you call but you're lying). P.S.: a "mock call" is a crime, and there are still homo sapiens who do it. In the section "Quanto custa" of the text above, you can read (it is in Portuguese) that the team and equipment of a single SAMU unit cost R$30 thousand! Imagine the cost of mobilizing all of this 17% of the time.

#TIP 2: STAY CALM. Analyze the situation; if it's possible for someone to rescue the patient faster, don't call SAMU.

What are the major reasons to call?

Among all the patients who were helped, these are the top 5 reasons:

CAUSAS EXTERNAS 10378
NEUROLOGICA 2962
CARDIOLOGICA 2383
INFECÇÃO 2219
RESPIRATORIA 1960

I don't know (yet) what causas externas means. Neurológica, in my head (I haven't interviewed anyone at SAMU to know more about it), must be head injuries and impacts from car accidents. The same goes for cardiológica; yes, a lot of people suffer heart attacks, mainly the elderly, as I could see from the data (next section). If I stratify the data by sex, the top 5 changes:

Masculino:
CAUSAS EXTERNAS 1024
INFECÇÃO 753
CARDIOLOGICA 735
NEUROLOGICA 732
RESPIRATORIA 683

Feminino:
CAUSAS EXTERNAS 807
INFECÇÃO 605
CARDIOLOGICA 595
RESPIRATORIA 532
NEUROLOGICA 523

And the neighborhoods?

There is a part of the dataset called "neighborhood_2015.csv" which describes in detail the data of the column "bairrosaude_descricao". I just asked myself which neighborhood had the most calls. And to understand why, I went to take a look at the IBGE profile of the CENTRO neighborhood; the number of calls is strikingly higher than in other neighborhoods. However, in the IBGE link about Recife there is no neighborhood called "Centro", so I still have to look up which neighborhoods CENTRO groups inside it. I still think this variable will be of great value for the model I'm going to create.
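Both the top-5 reasons table and the neighborhood question above boil down to counting the distinct values of a column, which is a one-liner in pandas. A sketch with a made-up reason column (the real labels are the Portuguese ones shown above):

```python
import pandas as pd

# Made-up sample of the reason column.
reasons = pd.Series([
    "CAUSAS EXTERNAS", "CAUSAS EXTERNAS", "CAUSAS EXTERNAS",
    "NEUROLOGICA", "NEUROLOGICA", "CARDIOLOGICA",
])

# value_counts() sorts by frequency, so head(5) is the top 5.
top5 = reasons.value_counts().head(5)
print(top5)
```

The stratified-by-sex version is the same idea applied after a `groupby("sexo")`.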
For the second part

In part two I’ll show more dataset transformations and the implementation of a few models to predict the “withdrawal” variable, along with metrics like correlation and how to build a ROC curve (since it’s a binary classification problem). I hope you liked the post, and here are my social media links: Youtube Channel debugasse: https://www.youtube.com/channel/UCey2da8VAlR--glrFCIggmA Github: https://github.com/fbormann
https://medium.com/the-data-experience/i-do-care-about-my-city-halls-data-5b0c3eccb952
['Felipe Bormann']
2016-06-14 23:25:10.898000+00:00
['Health', 'Data Science', 'Public System']
The Mistakes That Led GameStop to Its Downfall
Video games mean big money. It’s a multi-billion dollar industry that’s projected to continue growing in future years, so it should be no surprise to see companies trying to capitalize on the market. Console wars generate media coverage, new titles frequently wear a $60 price tag, and customers are always given reasons to spend more. This industry has allowed the store GameStop to thrive, but changes in the gaming industry are putting a strain on physical sales. This is going to impact brick-and-mortar stores, but GameStop is not a victim. They created their own problems, and they continue to sink themselves further. To understand the rise of GameStop, it’s important to understand their unique relationship with the consumer market and the economy. At the end of 2019, the company had 3,642 US locations. This number is really close to the number of US locations reported at the end of 2005: 3,624. Looking at these two data points would suggest a level of stability, but there was a period of growth and then a decline. At the end of 2011, the company reported 4,503 GameStop stores in the US. Toward the end of the 2000s and into the 2010s, it seemed like GameStops were popping up all over the place. Every strip mall seemed to get one, while some shopping malls had multiple GameStops. Perhaps this had to do with the acquisition of EB Games in 2005, but purchasing a competitor should be a strong sign of success. This period was also the perfect storm for the gaming market. The late 2000s saw the video game industry booming during the Great Recession. While gaming isn’t cheap, the amount of entertainment received for the cost outweighs other forms of entertainment. The Recession also made real estate cheaper, so GameStop could open more stores where other stores went out of business. Further, we were in a technological sweet spot.
The Xbox 360 and PlayStation 3 offered great graphics, but these graphics meant physical media (Blu-rays or HD DVDs) were necessary to hold games. While the movie and music industries were being hurt by bootlegging, gaming didn’t feel the same impact. So GameStop started thriving, but they made mistakes. Now they’re failing and can place a lot of the blame on themselves.

Mistake #1: They Grew Unsustainably

GameStop really saw their biggest jump in stores between 2004 and 2005, but that’s because they purchased EB Games. This doubled the number of stores under GameStop’s control, but opportunity arose shortly after the acquisition. A new generation of game consoles cycled into existence in 2005/2006, the Recession made real estate affordable, and GameStop could expand further. That doesn’t mean they should have grown, especially because they were already going through a transition. Absorbing EB Games meant GameStop didn’t have a major competitor specializing in gaming; they would need to compete against electronics stores like Best Buy or department stores like Target and Walmart. The same recession that made real estate more available to GameStop would also drive customers into department stores. They were simply growing because the space was available, not because there was an opportunity to overtake more competition. Now GameStop would need to focus on its used-game market to differentiate itself from these cost-friendly department stores. This would translate into behaviors that produce an unsatisfactory customer experience. Demand for video games didn’t mean demand for GameStop. The store had to offer something unique, and the quantity of stores did not translate to quality.

Mistake #2: They Thought They Were Invincible

Looking at the technology of the time, it was clear gaming companies were going to try to shift away from physical media. CDs were becoming obsolete, desktop computers could store games, and video game consoles were becoming more capable.
One reason GameStop could open two stores in the same town was their focus on used games. This meant customers could go to the eastside GameStop and find a different selection of used games than on the westside of town. Trading in games also meant a high inventory of used merchandise, and it takes room to store that. Going digital meant less physical content. You can’t trade in a digital copy of Halo, and that makes the GameStop model fairly pointless. Why pay $60 for Halo at GameStop when you can buy it digitally for $60 and stay home? The Xbox 360 and PlayStation 3 really weren’t capable of storing a library of games. Internet speeds weren’t as great in 2006, and computer storage was more expensive, but the groundwork for a console without physical media was being established.

Photo by Nikita Kostrykin on Unsplash

Meanwhile, GameStop was building a market around physical media. To be fair, people did want to be able to resell games, and this was made clear when the Xbox One was announced. Still, the gaming industry kept moving toward the point where digital games would become standard. For example, the original Wii had the “Virtual Console”, which allowed gamers to purchase a library of games digitally. They were all older titles from prior systems, but this concept was designed to chip away at the used-game market. Comfort with digital games would eventually chip away at physical stores.

Mistake #3: The Customer Experience Got Worse

Every store has its initiatives. They’re supposed to benefit the customer, but really, they exist to benefit the company. Brand loyalty can be mutually beneficial, but the GameStop experience has been flooded with a constant flow of brand initiatives. When any customer makes a purchase, they’re pressured into the PowerUp Rewards program. Admittedly, this can be beneficial to the right customer, but for casual gamers or parents, it’s a barrier to making a purchase.
Then customers are asked if they have anything to trade in for store credit, whether they’ve considered purchasing used, whether they want to pre-order a game, or whether they would like game insurance. Now customers are asked about trading in more than just games: the company is trying to get them to trade in technology (most specifically smartphones) in return for store credit. Trading in video games and consoles? That makes sense, but no one is buying smartphones at a GameStop. If customers don’t see their trade-ins on the shelves, it’s sensible to assume the company will reap the majority of the profit. GameStop has tried other initiatives, like handing out fliers to customers in the store. It’s fairly invasive, but also easy enough to ignore. When paying customers feel pressure beyond their intended purchase, their opinion of the company will decrease.

Mistake #4: They Put Profit Over Employees

The pressure placed on customers isn’t just employees trying to make more money. The company has been burdening employees with unrealistic expectations. It seems like every time I hear something about GameStop, the message is followed by employee frustrations. When the employee is overwhelmed, the customer feels it too. GameStop expects employees to hit certain metrics, and they get penalized if they don’t meet expectations. There are reports of employees losing shifts and even getting fired because they didn’t hit corporate goals. Ultimately, these goals make it harder for customers to buy games. The disregard for employee wellbeing came to a head thanks to COVID-19. Initially, stores stayed open and GameStop allegedly stated their employees were essential. Even when states ordered stores to close, GameStop tried to remain open. They even told employees to wrap plastic bags over their hands as a precautionary measure, showing a lack of regard for employee safety. By most accounts, COVID-19 is not an anomaly.
It focused a spotlight on a preexisting structure in which corporate wanted to prioritize profits.

Will GameStop Survive?

In hindsight, some of the mistakes made by GameStop are understandable. When other industries struggled during a recession, gaming grew. When other media went digital, games stayed on discs. GameStop had some reasons to believe they were different. It was really only a matter of time before the video game industry realized it wasn’t the exception. Growth can’t last forever, and eventually technology would make gaming more convenient than going to a physical store.

The red ring of death: a notorious sign of a dead Xbox 360

The truth is, GameStop is declining. It seems like they’re trying to slow the bleeding, but the end is coming. It’s not because the video game industry is going away; it’s because people don’t want to deal with a store like GameStop. It’s not a pleasant experience, because of the business model. Stores will continue to close, and soon malls will be filled with vacant GameStops, just like they had vacant RadioShacks last decade. 2020 could have brought some temporary relief for GameStop. New consoles are slated to be released, and that could drive more traffic into the stores. COVID-19 may hurt this process: there could be delays, restrictions on customer traffic, and consumers taking their shopping online. This time, GameStop is not immune. They will feel the brunt of current changes, and it will lead to their downfall. It’s only a matter of time, but soon enough, the plug will be pulled on GameStop.
https://beausoleil.medium.com/the-mistakes-that-led-gamestop-to-its-downfall-54ebb3c32a5c
['Michael Beausoleil']
2020-05-31 22:14:19.466000+00:00
['Marketing', 'Gaming', 'Economy', 'Business', 'Customer Experience']
Why Your Startup Needs Founder-Market Fit
Why Your Startup Needs Founder-Market Fit

What comes before product-market fit

I’m a senior Ruby on Rails engineer with a background in building enterprise software for SaaS companies. My co-founder is also a senior engineer, with a background in building enterprise data-processing pipelines. We co-founded a startup to build an appointment management system for dental clinics and beauty salons. Building the MVP, around 2018, was the best experience! My co-founder had a dentist friend who was eager to use our CRM in her dental practice. She also recommended us to another dentist and a beauty salon owner, so that by 2019 we had three active businesses using our CRM, all customers acquired through word of mouth. By mid-2019, we wanted to grow our business, since we already had a fairly complex product in place. This is when the founder-market fit consideration hit us. Both of us were excited to build enterprise software, but neither was interested in appointments management software. When we talked to customers during our sales process, you could easily tell that there was no spark of passion for solving scheduling problems. We followed Stephen Fry’s strategy for talking to customers early.

There are four signs you need to follow when considering founder-market fit:

Founder’s professional experience
Founder’s obsession
Founder’s personality
Founder’s background

Let’s break these down.
https://medium.com/better-programming/why-your-startup-needs-founder-market-fit-3bb38a782167
['Catalin Ionescu']
2020-11-17 17:46:29.848000+00:00
['Startup Lessons', 'Startup', 'Business', 'Startup Life', 'Programming']
What I’m Listening To Today, Episode #4
What I’m Listening To Today, Episode #4 There’s A Moon Out Tonight, The Capris I remain determined to end this week on a positive note. Last night, upon my evening arrival to the homestead, a rare letter from the tax man sat at the foot of the stairs awaiting my return. “You have paid too much tax”, it began. Wonderful. This is the best of all possible starts to the weekend. Or perhaps that is what I am supposed to think. I occasionally wonder if the tax man does this on purpose. His system of checks and balances includes over and under charging and then sending one of two templates out every six months. A system designed to raise you up, or sink you, depending on how he’s feeling. It’s a cruel game to toy with your emotions, to let you know that you are the state’s plaything. There was a time when the tax office didn’t tell you that it didn’t know what it was doing. And likewise, you did the same. It was a happy arrangement. Anyway, I’ve forgotten the letter at home. I was supposed to deal with that today, but it’s now a ‘next week’ task. Instead, I’m sleepily ensconced in my train carriage, listening to this song. It hearkens back to a simpler time, for many. The fifties were defined by: jukeboxes, formica tables, vinyl seats, poodle skirts and bobby socks, milkshakes, smoking as a healthy pastime, and drive-in movies. Johnny was going to be along any minute to take you to the dance in his Chevy Bel Air. This song was playing on the radio. Your song. You didn’t feel compelled to tell everyone you knew that you loved this song, because you only knew three people, and two of those people were your parents. The other one was Johnny, and he was probably going to ask you to marry him. Social media was still just an evil glint in Daddy Zuckerberg’s eye. What a time to be alive! 
Leaving aside the fact that: houses were made of asbestos, there was rampant inequality, civil defence drills, black lives didn’t matter, #YouAlso, half your classmates died from polio, and Johnny crashed the Bel Air on the way to your house and was killed instantly…the fifties were great, weren’t they? No? Well, alright then. Have it your way. No decade is perfect, excepting most of the ones that happened before we got here, I suppose. But, what got left behind…was the music, and I’m glad to have it this morning, on my way to work, colouring in the world as it passes me by. Somewhere back in time, there was a moon out, and maybe someone stopped and was moved to write a song about it. In the here and now, even if there was a moon out, you wouldn’t see it because of all the pollution, and even if you could, let’s face it, you’d be instagramming it to infinity. These are the times that we live in, and we must, for better or worse, embrace those times, providing that the embrace lasts for no longer than three ‘Mississippis’, we are wearing the approved protective clothing for the duration, and you have the signed paperwork proving that it has been independently judged safe for you to do so. Then you go home, throw your clothes in the incinerator, and bathe in disinfectant for half an hour. When the night time finally comes, and the moon is out somewhere, you can listen to this song, archived in the pleasingly sanitary environs of the digital world, and you will go to bed dreaming of a pleasant fifties Americana. On the bedside table sits an old photo, a creased but framed picture of Johnny leaning up against his ride, a blue-eyed vision from a different time.
https://medium.com/the-manic-depressives-handbook/what-im-listening-to-today-episode-4-1b4a921f0c21
['Jon Scott']
2020-07-10 10:53:30.506000+00:00
['Fifties', 'Humor', 'Blog', 'Music']
The world needs your creativity, imagination, and contribution
The world needs your creativity, imagination, and contribution

A reflective exercise bridging your heart’s desires and your mind’s adventures

Photo by Tetiana SHYSHKINA on Unsplash

As the job markets are being transformed by Covid-19 and the bigger technological shifts triggered by artificial intelligence, many professionals are worried about their future job prospects. In times of such uncertainty, imagination and creativity are your biggest assets. They will help you navigate uncharted territories. They will also enable you to think differently and create your own creative assets. Research suggests that jobs that require creativity are less susceptible to automation and machine learning. The things that make us uniquely human, weird, and creative have suddenly become our biggest assets, precisely because they cannot be replicated by machines, robots, and algorithms. This is why industries built on imagination and creativity have recently become the fastest-growing sectors throughout the world. For example, creative sectors in Britain have become more valuable (£87 billion a year) than the car, oil, gas, aerospace and life sciences sectors combined. 1 in every 11 employees now works in creative sectors and roles, and this proportion is bound to increase even faster. The most innovative and cutting-edge tech sectors, including AI, AR, VR, 5G, 3D printing, animation, computer games, and film, require high levels of creativity on a daily basis. Think about it: some of the most valuable brands in the world are all about imagination. Pokemon, Harry Potter, Star Wars, James Bond, Marvel, Barbie, Superman, Game of Thrones, Hello Kitty, Disney, Pixar… all are entirely built on the power of imagination. In the imagination era we live in, each of us must develop ourselves as creative artists and entrepreneurs. We need to rethink ourselves as creative leaders and architects of the future.
We should exploit the extraordinary opportunities presented to us by the creative and digital sectors. Creativity and imagination are the greatest assets we can develop. They are boundless: the more we create and use our imagination, the more it expands. How do we tap into our creativity? I have written a set of articles on this topic. Together, these articles provide a lot of creativity exercises and imagination experiments you can apply in your life. Check out the collection of these exercises below: A Digital Collection of Creativity Exercises
https://medium.com/journal-of-curiosity-imagination-and-inspiration/the-world-needs-your-creativity-imagination-and-contribution-a9636fae244e
['Fahri Karakas']
2020-07-30 07:50:51.602000+00:00
['Imagination', 'Self', 'Creativity', 'Life', 'Inspiration']
Broadcasting Actions — Integrate your React-Redux App with your backend
What are WebSockets? The WebSocket API is an advanced technology that makes it possible to open a two-way interactive communication session between the user’s browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply. https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API

With WebSockets we can build web applications that rely on near real-time responses. While we send something, we can receive at the same time; there is no long-polling to fetch data from the server. Instead, there is a pipe between the client and the server, and this pipe is always open. What applications can be developed with WebSockets? Chats, real-time analytics websites and more. In this article I am going to show you how we can use WebSockets in the React-Redux ecosystem by “dispatching” actions directly from the backend, reducing the development effort when it comes to event handling.

Photo by Alex wong on Unsplash

So let’s start with the story for this article: we have a website with a support chat, the communication is done via WebSockets, and the technology will be based on React-Redux. How would you design this kind of support chat? Feel free to comment and share your ideas. The source code will be shared at the end of the article.

To build this kind of chat, we need: the chat component that will render the messages list; the Redux setup for the chat that will store those messages; the backend that sets up the WebSocket server and an API to trigger the broadcasting of messages to the relevant sockets in the chat; and, last but not least, the client socket that initiates the connection and holds a reference to the store, so it can dispatch the actions flowing through the system from the client to the backend and back to the client. Seems complex? Not at all!
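On the backend side, the broadcasting of messages to the relevant sockets can be sketched as a loop over connected clients. This is a hand-rolled illustration, not the article’s actual code; a real server would iterate the 'ws' library’s client set, and the fake sockets below exist only so the logic is easy to follow:

```javascript
// Minimal sketch of the server-side broadcast: serialize a Redux action
// once and push it to every open socket.
const OPEN = 1; // WebSocket readyState constant for an open connection

function broadcastAction(sockets, action) {
  const frame = JSON.stringify(action);
  for (const socket of sockets) {
    if (socket.readyState === OPEN) socket.send(frame);
  }
}

// Fake sockets that just record what was sent to them (stand-ins for
// the 'ws' server's clients, used here so the sketch runs anywhere).
const makeFakeSocket = (readyState) => ({
  readyState,
  sent: [],
  send(frame) { this.sent.push(frame); },
});

const openSocket = makeFakeSocket(OPEN);
const closedSocket = makeFakeSocket(3); // CLOSED

broadcastAction(
  [openSocket, closedSocket],
  { type: 'MESSAGE_RECEIVED', payload: { text: 'Hi!' } },
);
console.log(openSocket.sent.length, closedSocket.sent.length); // 1 0
```

The interesting design choice is that what travels over the wire is already a Redux action, so the client has nothing to translate.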
In the following, I am going to show you the code so you can understand the role of every component in our solution.
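As a sketch of the last piece, the client socket that holds a reference to the store and dispatches server-sent actions, the core pattern might look like this (the hand-rolled createStore stands in for Redux’s own, and the MESSAGE_RECEIVED action shape is my assumption, not the author’s actual code):

```javascript
// Hand-rolled stand-in for Redux's createStore, used so the sketch runs
// without dependencies; in the real app you would import it from 'redux'.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); return action; },
  };
}

// Chat reducer: keeps the list of messages shown by the chat component.
function chatReducer(state = { messages: [] }, action) {
  switch (action.type) {
    case 'MESSAGE_RECEIVED': // assumed action shape: { type, payload }
      return { messages: [...state.messages, action.payload] };
    default:
      return state;
  }
}

const store = createStore(chatReducer);

// The key idea: the backend broadcasts plain Redux actions as JSON, and
// the socket's onmessage handler dispatches them straight into the store.
// With a real WebSocket you would set: socket.onmessage = handleSocketMessage;
function handleSocketMessage(event) {
  store.dispatch(JSON.parse(event.data));
}

// Simulate a frame arriving from the backend:
handleSocketMessage({
  data: JSON.stringify({
    type: 'MESSAGE_RECEIVED',
    payload: { from: 'support', text: 'Hi!' },
  }),
});

console.log(store.getState().messages.length); // 1
```

Because the backend sends actions in the exact shape the reducers expect, no extra translation layer is needed on the client, which is the effort this approach saves.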
https://ofirattia.medium.com/broadcasting-actions-integrate-your-react-redux-app-with-your-backend-b097cd53b30f
['Ofir Attia']
2020-12-02 08:06:54.487000+00:00
['Express', 'Coding', 'Websocket', 'React', 'Web Development']
Dumbstruck with the sweetness of being
A late afternoon lustre, still as Sunday. I remember being in the car. Kids crammed in the back, awkward cosy redcheek slumped — full of innocent air, like stuffed toys clutching their own stuffed toys. The elastic tie with London at full stretch, about to spring back. And I remember the sun — no, the light, making carousel flashes of the day just gone: cut grass, blossom fallen around and under trees, the ripple of laughter, and of hills, birdsong. The glow of sunshine warm and soft, a gentle breath of wind, each glancing off us like it would the face of a stopped clock. Time behind us and ahead of us. It’s like the world on pause, when the light is luminous and the day tires and the shadows stretch and nature circles back to quiet. Trees lay their arms out behind their heads, reach their fingers into field corners. The hills edge downward, their day’s exploration done. And Joanna Newsom is singing: There is a rusty light on the pines tonight sun pouring wine, lord, or marrow Down into the bones of the birches and the spires of the churches, jutting out from the shadows The yoke, and the axe, and the old smokestacks and the bale and the barrow And everything sloped like it was dragged from a rope in the mouth of the south below It’s the cadence of the words that pulls you first. Falling, unhurried, assured, like the curve of the hills banking away from us. The music like birdsong — fluttering plucked wires and orchestrated curlicues. But the words. The words. They animate: sun, poured like wine. They gleam: a rusty light. They sing: bones of the birches, bale and the barrow, mouth of the south. They capture too how the faded and the dilapidated and the old can be radiated by that same lustre I remember, by nature’s indefatigable beauty. The moment I heard Ys by Joanna Newsom for the first time was a beguiling concoction of nature, stillness and memory. It is a moment I return to and a moment that returns to me. 
The shock of the voice, high-pitched and creaking like backwoods pine. The antique swell of the music, each song carved more ornately than the last, its edges undulating, shifting, twisting and turning, like a landscape, the lines finding their own winding, natural course as would a mountain stream. Sometimes the stream slows, the terrain evening out. Sometimes it tumbles down, a torrent under which you can only stand, overwhelmed by the cascade of words. And, Emily, I saw you last night, by the river I dreamed you were skipping little stones across the surface of the water Frowning at the angle where they were lost, and slipped under forever In a mud-cloud, mica-spangled, like the sky’d been breathing on a mirror Reach in, grab a pebble — a phrase. Turn it over. Hold it up, let the water drip. Let it catch the light. Mica-spangled, like the sky’d been breathing on a mirror. You’ll see the world looking back, made new. Emily, then, is a song made of nature, stillness and memory. But also wonder. A wonder at what we might perceive if we allow ourselves, if we give free rein to our senses. The words, embedded in a particular landscape, in a particular experience, convey a deep, umbilical connection to someone the writer loves. The words conjure this wonder, and conjure us to see it. The words are written about and to a sister. To Emily. Anyhow — I sat by your side, by the water You taught me the names of the stars overhead that I wrote down in my ledger Though all I knew of the rote universe were those Pleiades loosed in December I promised you I’d set them to verse so I’d always remember Emily is an astrophysicist, and a teacher to her sister. Through Emily, Joanna learns about the universe. Joanna is clever too, though in a different way, a precocious girl who fashions a nursery rhyme to show her sister she understands, a nursery rhyme to contain the cosmos.
In this family, within this relationship, formed against this landscape, these so-called ‘two cultures’, science and art, are equal, each offering wisdom, each a vessel for childlike wonder. Each a source of poetry. Each — when pitted against our flimsy, transient idea of everyday reality — eternal. The meteorite is a source of the light And the meteor’s just what we see And the meteoroid is a stone that’s devoid of the fire that propelled it to thee And the meteorite’s just what causes the light And the meteor’s how it’s perceived And the meteoroid’s a bone thrown from the void that lies quiet and offering to thee It’s a chorus, of sorts, and one that plays with perception and reality. This is appropriate. Somewhere among this torrent of words lies a story, something goes horribly wrong, only it’s not quite clear. Perhaps over the years I’ve been mesmerised by the words — the words — or maybe it’s deliberately obfuscated. The main event happens off stage. We see neither bone nor the void from which it was thrown. Just the light. Luminous. But even in this nursery rhyme here we’re being misled. Joanna, still learning maybe, or simply the sister with the gift of poetic license, has switched the meanings of meteorite and meteoroid. Why, we’re not quite sure. Perhaps it’s narrative detail, placed there for anyone willing to look closely enough. Newsom does tend to leave clues like this, and her fans like to look for them: crumbs in the forest she knows will be picked up and picked over, by people hungry to know where every allusion leads, or where it came from. Follow the trail online and you’ll find forums full of theories, shared by people who scour lyrics and record sleeves, look for references in videos. Obsessives, looking for the meteoroid. I think they should be content with just the light. Those fans will tell you, though, that Emily, the song, contains a mystery. A tragedy.
In search of the midwife who could help me And my clay-colored motherlessness rangily reclines For some, the mentions of midwife and motherlessness suggest a miscarriage. Certainly the second half of the song appears to be about something going wrong for Joanna. As a result the sisters’ relationship turns from teacher and pupil, to nurse and patient. Emily’s support is unyielding, but there’s a suggestion that she unwittingly makes things worse for her sister as she exposes an intensely private experience. You came and lay a cold compress upon the mess I’m in Threw the window wide and cried, “Amen! Amen! Amen!” The whole world stopped to hear you hollering You looked down and saw now what was happening The lines are fading in my kingdom. Things fall apart. Her surroundings turn. The community she lives with makes her the centre of attention. The talk in town’s becoming downright sickening Joanna characterises the townsfolk as gossips, uncivilised. The muddy mouths of baboons and sows and the grouse and the horse and the hen She even acknowledges how much that deep connection we have with our family can hurt us. The ties that bind, they are barbed and spined, and hold us close forever But eventually, things even out. We pause. The torrent becomes a stream again, the stream slows to a trickle. There’s a call to return, and naturally a reunion between the sisters is marked by a reunion with everything else that’s important. They go back to the landscape. Back to nature, stillness, memory. Come on home, the poppies are all grown knee-deep by now Blossoms all have fallen, and the pollen ruins the plow Peonies nod in the breeze and while they wetly bow With hydrocephalitic listlessness ants mop up-a their brow And everything with wings is restless, aimless, drunk and dour The butterflies and birds collide at hot, ungodly hours It’s beautiful. Nothing dilapidated here, nothing sloped. Everything’s happening at once.
Nature seems drunk on itself, a mesmerising, fizzing cycle of countless microscopic details. And somewhere inside what we’re shown, hidden, or perhaps hovering among it like dust motes caught in the sun, no — the light, or the lustre of a late afternoon — is that feeling again. A calm. The same residual stillness from the beginning of the song. Time behind us and in front of us. Light bouncing off a stopped clock. We’re back at the end of the elastic. When everything makes sense and nothing matters. We could stand for a century, Staring, with our heads cocked in the broad daylight at this thing Joy, landlocked In bodies that don’t keep Dumbstruck with the sweetness of being And I remember again the light. I remember being in the car. I remember a sense of awe, at the eternal, fleeting, profound, meaningless beauty of it all.
https://medium.com/a-longing-look/dumbstruck-with-the-sweetness-of-being-a6d3ac8a1533
['James Caig']
2018-08-07 16:42:34.871000+00:00
['Lyrics', 'Poetry', 'Music', 'Joanna Newsom']
The nuclear family, and what’s wrong with everything
David Brooks hits the nail squarely on the head

Delighted to see this amazing piece by David Brooks in The Atlantic.

During this period [between 1950 and 1965], a certain family ideal became engraved in our minds: a married couple with 2.5 kids. When we think of the American family, many of us still revert to this ideal. When we have debates about how to strengthen the family, we are thinking of the two-parent nuclear family, with one or two kids, probably living in some detached family home on some suburban street. We take it as the norm, even though this wasn’t the way most humans lived during the tens of thousands of years before 1950, and it isn’t the way most humans have lived during the 55 years since 1965. Today, only a minority of American households are traditional two-parent nuclear families and only one-third of American individuals live in this kind of family. That 1950 – 65 window was not normal. It was a freakish historical moment when all of society conspired, wittingly and not, to obscure the essential fragility of the nuclear family.

It was fragile for many reasons, an important one of which is that it relied on women being home to look after husband and kids. Clearly, that’s not a proposition many people want to buy today. I mean, sure. Some do. Some women are perfectly happy, fulfilled and content being a stay-at-home wife and mother. Which is fine. Everyone deserves the happiness that suits them. It’s just that you can’t force women to stay home in order to re-create a 1950s ideal that was itself a freakish blip in the history of humanity. I’ll say it slowly: The nuclear family, the mom-dad-and-two-kids-in-the-burbs, was a massive sociological and cultural mistake. No, not your own personal family. I’m sure you’re fine. That’s not what I mean.
I mean society’s focus on having the nuclear family be the centre of everything, which is something some lawmakers and most moralists try to move us back to because they have happy memories of their own childhood or something.
https://medium.com/age-of-awareness/the-nuclear-family-and-whats-wrong-with-everything-b20e7ef5fd3a
['Brigitte Pellerin']
2020-02-20 23:36:46.323000+00:00
['Openness', 'Society', 'Tolerance', 'Family', 'Children']
Getting started with next js
Working with React.js is fun until you have to do SSR (Server-Side Rendering) or SEO (Search Engine Optimization) for your application. When I created my first application in React.js I ran into a problem with SSR because my backend was in WordPress, and there wasn’t much information at the time on how to do SSR with PHP in the backend. SSR is required to do SEO for your application, and since there was no way to do that with a PHP application, I ended up generating static HTML for my React application. I will cover that in another tutorial 🙄.

So SSR was a big deal until I got to know about Next.js. Next.js is like React.js on steroids. It supports a bundle of features like:

SSG (Static Site Generation).
SSR (Server-Side Rendering).
Built-in CSS and Sass support.
Fast Refresh.
Code-splitting and bundling.
File-system routing (every component in the pages directory becomes a route).
Built-in image optimization.

So using Next.js alone lets you skip the most popular React.js modules like react-router, react-snap, helmet, node-sass, etc. in your project. The coding structure in Next.js is the same as in React.js; only the folder structure is somewhat different, and you can easily migrate your React.js application to Next.js. So now let us see how to use Next.js for our next project.

Requirements

Node.js 10.13 or later; macOS, Windows (including WSL), or Linux.

Setup

1. To create a project, open the terminal and run the command below: npx create-next-app projectName # or yarn create next-app projectName
2. Start the server: npm run dev # or yarn dev
3. Open http://localhost:3000 in your browser.

Creating Routes

Creating routes in Next.js is comparatively easier than in React.js. Let’s see how to do that.

1. Open the pages directory and create an about.js file.
2. Now add some code inside that file.
3. Now visit http://localhost:3000/about in your browser. The about route has been created.

For creating dynamic routes:
Create a directory named blog inside the pages directory, then create an index.js file inside the blog directory. This creates a route for the blog. Add some JSX to the index.js file. Next, create a directory named [id] inside the blog directory to create dynamic routes under the blog. Then create an index.js file inside the [id] directory and paste the below code. Now you can visit http://localhost:3000/blog/anynamehere to check the dynamic route. Data Fetching There are different methods in Next.js for data fetching, each with its own uses and advantages. Below I will show you the simplest way of data fetching, which is also useful for SSR. For this, I will use a third-party API from jsonplaceholder.typicode.com. 1. Install Axios for data fetching in your project: npm i axios # or yarn add axios 2. Now open the index.js file in the pages directory, remove everything, and paste the below code. 3. Start your server with npm run dev and visit http://localhost:3000 to see the changes. All the user names will be printed, and if we check the source using the Chrome dev tools we will see that the page is populated with the data, which is essential for SEO. Error Page 1. Create an _error.js file inside the pages directory. 2. Paste the below code inside that file. 3. Now visit any random page which doesn't exist and it will show an error message. Built-in CSS Component-Level CSS 1. Create a components directory in the root folder, then create a card directory inside that components directory. 2. Inside the card directory create two files, Card.js and Card.module.css. 3. In Card.module.css, paste the below CSS. 4. Then paste the below code inside the Card.js file. You can see that I have imported the CSS as styles and then passed it as className to the div of the card, applying the .card style from the stylesheet. Now you can call this card component anywhere in your project, as I have imported it into our app.js file.
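The code for the dynamic-route page was shared as an embedded image rather than text, so here is a minimal sketch of what pages/blog/[id]/index.js could look like. This is an assumed reconstruction, not the author's original snippet; the [id] folder name becomes the id key on router.query:

```jsx
// pages/blog/[id]/index.js — hypothetical reconstruction of the
// dynamic-route page. useRouter exposes the dynamic URL segment.
import { useRouter } from 'next/router';

export default function BlogPost() {
  const router = useRouter();
  const { id } = router.query; // e.g. "anynamehere" for /blog/anynamehere

  return <h1>Blog post: {id}</h1>;
}
```

Any path under /blog/ now renders this component, with the last segment available as id.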
Note: Next.js supports CSS Modules using the [name].module.css file naming convention. CSS-in-JS Create a Button.js file inside the components folder and paste the below code. The above CSS will be scoped only to the button and will not apply to any other elements. This way you can scope the styling to a single element. There is also another style option that will apply the CSS globally, but then you won't be able to use scope-based styling: <style global jsx>{` //your css here `}</style> That's it for today. There is a lot more you can do with Next.js than I can explain in a single tutorial. I'll try to cover more topics on Next.js in my future tutorials. Till then, enjoy reading my tutorials. Below I have shared live code and a GitHub repository for reference.
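The Button.js snippet was likewise attached as an image, so this is an assumed sketch of a scoped CSS-in-JS button using styled-jsx, which ships with Next.js (the styling values are illustrative, not the author's):

```jsx
// components/Button.js — hypothetical example of styled-jsx scoping.
// Rules inside <style jsx> apply only to elements in this component.
export default function Button({ children }) {
  return (
    <>
      <button>{children}</button>
      <style jsx>{`
        button {
          background: #0070f3;
          color: #fff;
          border: none;
          padding: 0.5rem 1rem;
        }
      `}</style>
    </>
  );
}
```

A plain `<button>` rendered by another component is unaffected, which is exactly the scoping behaviour described above.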
https://medium.com/how-to-react/getting-started-with-next-js-321cafa758ec
['Manish Mandal']
2020-12-16 19:21:52.436000+00:00
['React', 'Reactjs', 'JavaScript', 'Nextjs', 'Axios']
Tips on designing for AR/VR (with examples)
A few tips on designing for AR/VR, with examples from my experience. Over the last few years I have designed and prototyped multiple AR/VR experiences. Having worked with 3D for some time and being interested in immersive digital experiences, AR/VR was an obvious choice for me. I started off creating simple experiences and eventually moved into creating full-scale products. Although not an all-inclusive list, the following are some design tips from my experience, with examples from my projects: Tip 1 — Remind the user regularly where they are in an experience One of my earlier projects in the field of AR/VR was a VR project: a guided walkthrough of historical sites for students in rural schools. The Google Cardboard had just been released during that period. I used the Google camera app to capture 360 photos of a historical site and labelled them in the local Telugu language. These photos were used to create a step-by-step walkthrough of the site along with audio narration, again in Telugu. Presence (in terms of location, status, remaining content, etc.) was found to be very important to a VR experience. One of the key learnings from the initial stages of this project was that the students had trouble learning the relative locations in the walkthrough, i.e. they were not able to memorise what exactly comes after a particular doorway. I solved this by using an interactive map inspired by popular games. The introduction of the map helped establish the relation between locations much better and made it easier for the students to remember relative locations. Thus, reminding the user where they are and what is nearby, reachable, or coming next is very important in any immersive experience. Tip 2 — Increase immersion by including other human senses One way to increase immersion is by including other human senses in the experience. I did a project called fiSeeyo, marrying haptics with VR. It was a remote physiotherapy solution for upper limb motor function enhancement.
A doctor could remotely use a 3D mouse to assign physiotherapy exercises for the upper limbs of a patient. The patient on the other side of the system uses VR and a haptic pen to perform gamified tasks prescribed by the doctor. Haptics and VR seemed a perfect match, since manipulating objects with different weights and placing them based on touch feedback (like putting a box in a hole) was much more intuitive with the haptic pen and the cardboard headset. Adding the extra sense of touch was not only beneficial in itself but also aided the manipulation of 3D objects. Thus it helped increase the involvement of the user in the experience. Tip 3 — Minimize user movement An interesting problem that I had to tackle was how to rotate an object in 3D. During my time at Adobe I did a project on sculpting in Augmented Reality. (It went viral as one of the first sculpting experiences in AR.) The object to be sculpted was a sphere. Since it was designed for AR, the user could go around the sphere and sculpt it. This seemed all well and good in the beginning, but then it started weighing down on the users' experience as fatigue set in. I realised that moving around the object should be used minimally, for something like visualisation. If the user has to work on the object continually, we should give an option to rotate the object without the user having to move. This is when I introduced something called an "orbiter". This is a circle at the bottom right corner, which on tap will centre the sphere. On dragging the orbiter, the object in focus rotates. By calibrating some of the control variables I got a zen setting which made it smooth to rotate and sculpt the object. The users loved it! I found that this could be used in 3D applications to manipulate 3D views as well. Tip 4 — Use intuitive gestures I absolutely love gestures. They are some of the most expressive ways of interacting with technology (maybe after voice).
But they should be used minimally and should be made as intuitive as possible. Ameyt World was a project for assembling 3D blocks together and giving them interactivity in mobile AR. It was a bit like Lego with actions (yes, Minecraft AR stole my thunder). I started off by using gestures for manipulating the view in the creation mode. This included two-finger pinch-zoom, one-finger drag to rotate, and two-finger movement for panning. But the problem was that users were accustomed to panning using one finger (e.g. Google Maps), so rotation became a new interaction. So I decided to follow the popular model: one-finger drag to move, and two-finger drag to rotate. This led to another problem. Building a new block was done by tapping on an existing block, so a single-finger swipe became a tap first (which built a block) and then a drag. On top of this, there was yet another problem: I let users rotate a block by swiping it. So a tap-drag erroneously built or rotated a block. This led to a very exciting design-engineering challenge of finding sweet spots in the drag gesture time, the build gesture time, and the block-rotate gesture time. Since I was designing and developing at the same time, I could go through multiple options with users and find the sweet spot. Adding a UI element could have solved this problem easily, but I did not want to clutter the UI and overload users (especially beginners). The final solution, which had fewer but more natural gestures, worked well even for beginners. Thus, in many places where gestures are implemented, it's about finding the right balance between the number and type of gestures and UI elements. Tip 5 — When AR is limited, think of VR For pre-visualisation and planning of movie shots, I did a project which helped movie directors plan a shot in AR at a location.
A director could arrive at a location and, using a mobile device, place props, lights, and camera angles on the location, or even leave audio or written notes in space in Augmented Reality. This helped them plan a shot and pass it on to their colleagues as instructions. The problem came when we realised that the location was often not available during the long planning stages. This is when we introduced a VR mode in the AR application, where a 360 view of the location could be captured so that the user could later plan shots from the comfort of their home. This solution could also be adopted for other applications like furniture placement, interior design, etc. Bonus Tip — Fly high in the sky A fish swims in the water, a lion roams the jungle, and a bird flies high in the sky. Design for context. Try to leave those animated 3D models in their actual environments. Yes, a bird would look cool flying inside your office room, but it would look even better, and more natural, flying high up in the sky. Also, use elements creatively to accomplish a purpose. The clip above is from a project called Aquila. The 3D model of an eagle (rigged and animated) responded to voice commands like "have food", "fly away", etc., similar to some of the existing pet apps. But unlike other pet apps, this app was an interface for flying a drone. When the command "be my eyes" was uttered to the eagle, the feed on the mobile device changed to the drone's feed. Hence the metaphor of the eagle flying and being the eyes in the sky fit well and helped accomplish the task of initiating the drone feed creatively with AR. And of course, the AR eagle flying in the sky looked more natural.
https://uxdesign.cc/design-for-ar-vr-8713bb54da72
['Fabin Rasheed']
2019-08-19 01:01:36.584000+00:00
['Design', 'Augmented Reality', 'Virtual Reality', 'Technology', 'UX']
React Icon System
Icons play a crucial role in interface design. They can certainly be used as visual embellishments, but they are quite often able to convey their meaning without additional text, making them a handy tool for designers and developers. There are many different ways to build icon systems. In the past, I have written about a sprite-based technique. Since then, tooling has matured and there are better approaches. This article will show you how to set up an icon system using SVGR, a tool for transforming SVGs into React components. Prepare the SVG Files Our starting point will be SVG files, one per icon. You will likely use design tools like Figma, Illustrator or Sketch to create these. When designing these icons, consider using a consistent artboard size. This ensures that all icons follow the same layout rules and can be used interchangeably. You should also consider adding a bit of padding to your artboard to keep the icon content visually centred. Artboard size, live area and padding Generating the Icon Components SVGR converts SVG files into React components. It is available as a Node library, a CLI tool and a webpack plugin. Create React App comes pre-configured with SVGR, so you can import an SVG file and use it as a component. This is a great start: it reduces the effort required to use SVGs with React. import { ReactComponent as Logo } from './logo.svg'; function App() { return ( <div> {/* Logo is an actual React component */} <Logo /> </div> ); } By using the SVGR CLI, you can customize the component generation and improve your workflow further. You can provide a custom template for component generation and even transform the SVG itself. To start, install the CLI using: $ npm install @svgr/cli --save-dev To create an icon, run: $ npx svgr --icon --replace-attr-values "#000=currentColor" my-icon.svg Notice the --icon flag. It performs a couple of important tasks for us: It sets the width and height values to 1em to make the SVG scale with the inherited font-size.
It preserves viewBox to ensure that the SVG scales with the correct aspect ratio. The --replace-attr-values "#000=currentColor" flag replaces the chosen color with currentColor , allowing you to control the icon color using the color CSS property. Behind the scenes, SVGR also uses SVGO to optimize the SVG file before converting it into a component. This is a sample of what you can expect the output to look like: // MyIcon.js import * as React from 'react'; function SvgMyIcon(props) { return ( <svg width="1em" height="1em" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth={2} strokeLinecap="round" strokeLinejoin="round" {...props} > <path d="M22 12h-4l-3 9L9 3l-3 9H2" /> </svg> ); } export default SvgMyIcon; To transform an entire directory of icons, use: $ npx svgr --icon --replace-attr-values "#000=currentColor" -d icons icons I generally treat these generated icon components as build artifacts. All the SVG files live in an icons directory, and the .js files within that directory are ignored by git. I then use an npm task to generate the icon components at build time. // package.json { ... "scripts": { "start": "react-scripts start", "build": "react-scripts build", "prebuild": "npm run icons", "test": "react-scripts test", "eject": "react-scripts eject", "icons": "svgr --icon --replace-attr-values '#000=currentColor' -d src/icons src/icons" } } Customizing the Icon Component You can provide a custom template to modify the generated component code. The template below creates an icon component that leverages styled-components to control its styling. // icon-template.js function template( { template }, opts, { imports, componentName, props, jsx, exports } ) { const styledComponentName = 'Styled' + componentName; return template.ast` ${imports} import styled from 'styled-components'; const SVG = (${props}) => ${jsx}; const ${componentName} = styled(SVG)\` display: \${(props) => (props.block ?
'block' : 'inline-block')}; font-size: \${(props) => (props.fontSize ? props.fontSize : '16px')}; color: \${(props) => (props.color ? props.color : '#ccc')}; vertical-align: middle; shape-rendering: inherit; transform: translate3d(0, 0, 0); \`; export default ${componentName}; `; } module.exports = template; For the Radius design system, we are using TypeScript, styled-components and styled-system. Our custom template generates icons that are correctly typed and appropriately connected to our design tokens. Compound Icons Component We can further simplify the icons' usage by combining all the generated icons into one compound Icons component. import { Icons } from 'ds'; export const App = () => ( <> <Icons.Airplay aria-label="airplay the video" /> <Icons.AlertCircle aria-label="error" /> </> ); SVGR allows us to specify a custom index template. This template is used to generate the index.js file when transforming a directory of SVGs. The following template generates a compound component. // icon-index-template.js const path = require('path'); function indexTemplate(files) { const compoundExportEntries = []; const importEntries = files.map(file => { const componentName = path.basename(file, path.extname(file)); compoundExportEntries.push(componentName); return `import { default as ${componentName} } from './${componentName}';`; }); return `${importEntries.join(' ')} export const Icons = { ${compoundExportEntries.join(', ')} }; `; } module.exports = indexTemplate; It adds an import statement for each component, generates a name for the component based on the file name, and finally combines them all into the Icons object.
// src/icons/index.js import { default as Activity } from './Activity'; import { default as Airplay } from './Airplay'; import { default as AlertCircle } from './AlertCircle'; import { default as AlertOctagon } from './AlertOctagon'; export const Icons = { Activity, Airplay, AlertCircle, AlertOctagon, }; And here is the final version of the npm task: // package.json { "scripts": { ... "icons": "svgr --icon --replace-attr-values '#000=currentColor' --template icon-template.js --index-template ./icon-index-template.js -d src/icons src/icons" } } SVGR is a fantastic tool. You can use it out of the box with Create React App. You can also customize it to better fit your workflow. The full code for this example is available here. For a more complex example, I recommend checking out the Radius source-code.
https://medium.com/javascript-in-plain-english/react-icon-system-4ec388ed24d5
['Varun Vachhar']
2020-06-12 16:30:50.590000+00:00
['JavaScript', 'React', 'Icons', 'SVG', 'Programming']
Why the Left Never Offers Solutions to Problems
We never hear specifics from organizations like Black Lives Matter on how to solve racism. Organizations like Black Lives Matter are great at pointing out problems they think exist but never offer logical solutions with specific goals and metrics. We're told by BLM, Antifa, and grifters like Robin DiAngelo that there's "systemic" and "institutional" racism. Racism is everywhere and all White people are racist! Activists get away with making racist accusations against a particular group without any consequences. As long as you make racist accusations against certain groups of people, it's OK. The race industry never comes up with specific, logical solutions with defined metrics and goals. The most infamous idea is defunding police departments, which hurts the poor inner-city minority groups who are victimized by intraracial violence within their communities. Neither BLM nor the media ever cover those parts of the issue. People whom the industry deems "problematic" can't ever rid themselves of the original sin of racism bestowed on them by an anti-racist movement that accuses an entire race of being racist. There are no solutions offered to rid people of their supposed inherent racism. How do we solve the problem with no solutions to do so? There's no sense of urgency from the Left to define actions that would eliminate all racism. If that were their real mission, it would be a noble effort. But the Left needs racism to exist. If it isn't a problem, a big part of their movement goes out of business. The $20k corporate speaking gigs for Robin DiAngelo and Ibram Kendi must go on! Therefore, we only hear how terrible and doomed certain people are.
https://medium.com/common-sense-now/why-the-left-never-offers-solutions-to-problems-248c266bde5f
['George Chambers']
2020-12-07 22:04:32.981000+00:00
['Diversity', 'Society', 'Equality', 'BlackLivesMatter', 'Racism']
The Founding Father of Bling
In 1987, a decade before the term “bling” invaded popular vernacular, underground hip-hop star Biz Markie came to upstart jeweler Jacob Arabo with an unusual request. Biz wanted a signature piece, something so big and eye-catching that it worked as a costume: in other words, something fans could see from the stage. Arabo appreciated the challenge and got to work, designing the now iconic four-finger ring that spelled “Biz” in diamond-studded script. “Nobody back then would make something like that,” Jacob recalls. The performer loved it, wearing it not just at concerts but also for photo shoots and on the cover of his hit single “Just a Friend” that released the next year. The “Biz” ring had become part of Markie’s signature look and Arabo had unknowingly discovered his signature style. After that, Arabo’s custom business took off. He made his clients comfortable: despite his exquisitely tailored suits and throwback pomade waves, he, like most of the rappers and athletes he catered to, had the faint shadow of the outsider, a memory of the time when the path to success was still poorly lit. But where had this renegade jeweler gotten his start? In 1979, a young man named Yakov Arabov immigrated to New York City with his family. They were Bukharan Jews from Uzbekistan, then a part of the Soviet Union, and they arrived in the States with very little money. Arabov, just fourteen years old, enrolled in high school but felt compelled to help his parents with their financial troubles. In the United States, the young teenager wanted to build a professional career even before he graduated. He considered becoming a photographer, or a hairstylist, but he signed up for a six-month government course in jewelry design. After learning how to use the tools of the trade, Yakov Arabov anglicized his name to Jacob Arabo, left school, and found a job at a jewelry factory where he earned $125 a week (approximately $338 today). 
It was a good salary for a sixteen-year-old boy, but Jacob wasn’t there for just lunch money; he had grand ambitions of earning enough to take care of his entire family. Within a few years, by the time he was twenty, he quit and opened his own retail stall on Sixth Avenue and Forty-Seventh Street, in the heart of New York’s diamond district. Arabo displayed his creations in the window, which immediately stood out from the others on the block due to the unusual size and scope of his designs. His competitors told him he was wasting his time with showpieces; rather, regular income came from walk-ins, usually men in the market for diamond solitaires and young couples shopping for wedding rings. But Arabo hadn’t come all this way to spend every day dropping white diamonds into knockoff Tiffany settings. He remembers: “Before I went into the business my friends told me — don’t. What are you doing, there’s sharks all around you, they’re gonna eat you alive. I said, I guess I have to become a shark.” Jacob Arabo had already cut his teeth. The next step was learning the hunt.
https://medium.com/cuepoint/the-founding-father-of-bling-6d31d01fe0de
['Rachelle Bergstein']
2016-09-16 16:47:33.838000+00:00
['The Bookshelf', 'Hip Hop', 'Diamonds', 'Music']
US Proposes Another Sale Of Weapons To Taiwan Amidst Tension With China
The Trump administration has proposed yet another sale of weapons to Taiwan, this time involving a $2.37 billion Harpoon missile system made by Boeing. This comes in the immediate wake of China's threat to impose sanctions on all companies involved in such deals, including Boeing. China has recently expressed its strong disapproval of any such deals between the US and Taiwan, referring to them as actions that "severely undermine China-US relations" as well as the "peace and stability across the Taiwan Strait." "The United States maintains an abiding interest in peace and stability in the Taiwan Strait and considers the security of Taiwan central to the security and stability of the broader Indo-Pacific region," the State Department said. It said the sale would not alter the military balance in the region.
https://medium.com/australians-news/us-proposes-another-sale-of-weapons-to-taiwan-amidst-tension-with-china-bbdd3b322132
['Steven Psaradakis']
2020-10-30 08:24:03.913000+00:00
['Society', 'Taiwan', 'War', 'USA', 'China']
When Sacredness Was Endemic to All Things
Pavel Jedlicka @freeimages.com I will decide to love the empty book of your body, the blank pages of sages hermiting stories deep within you. The ones I will never come to know because they inhabit the mystery that animates your soul. No matter what kind of degree in advanced humanity I achieve, I can never graduate into the school of your secrecy. Not the secrecy that comes from willful withholding, but the secrecy of what can never be known of you, even by you. I once heard that a couple had this line in their wedding vows: “I promise to never know you.” This was their way of putting the mystery I speak of at the center. For to thoroughly know something is to lose sight of it altogether. This is how they healed the damage of divinity being separated from nature. This is how they used their love to reach back to an earlier time, a time before the divorce of matter and spirit, when sacredness was endemic to all things. This is how I will now hollow out a tree to enunciate love. This is how I will soak all the fiber and form it into the empty pages of your precious singularity, so I never, ever, forget your story of eternity.
https://medium.com/100-naked-words/when-sacredness-was-endemic-to-all-things-e22962f1be88
['Samantha Wallen']
2017-07-11 06:01:00.568000+00:00
['Love', 'Soul', '100 Naked Words', 'Poetry', 'Creativity']
The Privacy Paradox: Government Regulation and the Erosion of Technological Transparency
The era of self-regulation for the technology industry appears to be over. In its place, a series of new government regulations are being implemented. The European Union (EU) introduced its General Data Protection Regulation (GDPR) in 2018 to ensure appropriate levels of privacy for its citizens and transparency from technology companies. More recently, industry leaders have advocated for similar standards to be applied globally. The issue for technology companies, practitioners, and users is that transparency and privacy are highly subjective terms that can serve contradictory ends. Technology companies with valuable intellectual property and lofty growth targets are motivated to maintain narrow definitions of transparency and privacy. The more user data they can collect, utilize, and monetize, the better. In a self-regulated environment, these companies have created their own transparency and privacy policies, signalled their virtue by publishing reports (see "Our Continued Commitment to Transparency"), and faced little if any consequence for policy violations. As such, a generation of technology companies has achieved hyper-growth while their innovations and business models outpaced the law and lawmakers. If not from within, effective regulation must come from without. Users of technology products, on the other hand, hold much broader definitions of transparency and privacy. Some want to know how their personal data is used by a company, how it affects the content or features they're exposed to, or the rationale for decisions made about them by automated systems. However, the rights of users have been opaque and their options for recourse uncertain. This is changing with the GDPR. Through it, the EU has introduced broad rights for users (e.g., rights to an explanation and to human intervention). At least in Europe, the lawmakers are catching up. The technology industry has been quick to respond.
Facebook CEO, Mark Zuckerberg, published an op-ed in the Washington Post (see “The Internet needs new rules. Let’s start in these four areas”) where he implored lawmakers to introduce a “globally harmonized framework” and stated that it would be “good for the Internet if more countries adopted regulation such as GDPR as a common framework”. Perhaps surprisingly, Facebook is leading the charge to regulate the internet. Then, at the recent Facebook F8 conference, Zuckerberg declared, “The future is private”. At the conference, he showcased the vision of privacy-focused Facebook, Messenger, WhatsApp, and Instagram products. To achieve this level of privacy, Facebook is redesigning its products with end-to-end encryption. Paradoxically, this positive orientation towards privacy has a correspondingly negative effect on transparency. Rather than newly transparent, the next generation of Facebook products will be completely opaque to users, regulators, and within Facebook itself. Facebook’s call for internet regulation and their pivot towards privacy-focused products introduces at least three ethical dilemmas. With end-to-end encryption, Facebook absolves itself of any moral responsibility for the content and messages distributed within its platform and avoids the legal complication of being classified as a media company. Further, end-to-end encryption means that core components of Facebook’s products and their data remain out-of-reach of GDPR-like regulation and its requirement for transparency. Finally, as technology companies and governments are busy outwitting each other, users remain without applicable regulations and actionable rights. If the future is private, it will be even less transparent.
https://medium.com/ethicism/the-privacy-paradox-government-regulation-and-the-erosion-of-technological-transparency-35fbcf2ca229
['Jordan Eshpeter']
2019-05-16 23:22:42.414000+00:00
['Transparency', 'Facebook', 'Gdpr', 'Tech Ethics', 'Privacy']
Venture Capital 2.0: This Time, It’s Different?
Venture capitalists are partying as if it's 1999, and we know how that went. Entrepreneurship 101 says startups should conserve cash, keep costs flexible, and operate with parsimony. Way too many founders in the '90s laughed at that, the same way they're laughing today. The Money Keeps Pouring In, But It Keeps Pouring Out, Too Investment in venture capital in 2018 surpassed $100 billion for the first time since 2000. With all that money sloshing around, the deals are getting bigger. In 2018, deals worth more than $25 million accounted for nearly 13 percent of all investment, and mega-rounds, defined as funding rounds worth more than $100 million, accounted for 47 percent. In the midst of all this exercising of free-flowing pens, I have to wonder whether all those people putting all that money into so many me-too, growth-at-all-costs-we'll-worry-about-profits-later ventures are a tad delusional. Some of the assumptions that would have to be true for so many of these businesses to become profitable, self-standing enterprises without continuous infusions of cash are truly heroic. Consider the case of MoviePass, the long-struggling venture that hoped to "disrupt" (oh, that word) the conventional movie business by letting customers go to the movies in theaters for a fairly low subscription price. Majority owner Helios & Matheson, which bought into the company in 2017, went for subscriber growth by offering virtually unlimited access to movies for a monthly subscription price of $9.99. Customers loved it; after all, that price was lower than the cost of seeing just one movie in some theaters. And if subscriber growth was what the company wanted, this strategy succeeded wildly, taking it from 20,000 users to over 3 million at its peak. Helios & Matheson's stock soared, and for a few shining moments, on paper, everything looked great. But, and it was a big but, the economics of this proposition simply didn't work.
The company paid full price for every customer ticket, so it lost money on every transaction. It was never able to persuade theaters to share concession revenue, and the hoped-for play of selling consumer data gleaned from customers never worked out either. Take it from an entrepreneurship professor, people: it is really hard to build a platform on which you lose money on every transaction and hope to make it up somewhere else in your business model. Meanwhile, as is often the case with failed ventures, MoviePass did teach cinemas that consumers are interested in a subscription business model for movies. AMC has launched its own, AMC Stubs A-List, which allows subscribers to see up to three movies a week for $19.95 a month. People have flocked to it, taking out more than 175,000 initial subscriptions and surpassing the company's expectations. Which brings us to a second lesson (the first being: don't price below cost): it is rare that being the first mover creates an indestructible position. It can happen, but typically only when you have one of two things going for you. The first is network effects: the more users you have, the more valuable the offering becomes to other users, so the first to get going are impossible to catch (think Facebook). The second is ecosystem effects: when you occupy a protected role in a business configuration (think Windows software and Intel chips). All too often, first movers just show sleepy incumbents what customers really want. When the Empire Strikes Back, the upstarts get squashed. Too Many Unicorns As Steve Blank points out, in VC 1.0 founders and employees had the same horizon for a liquidity event: six to eight years from startup. Today, as companies can capture $50 million-plus funding rounds, they can put off going public for a decade or more. This fundamentally changes the entrepreneurship game. It puts early employees at a disadvantage. It encourages sloppiness.
It leads to company valuations that have nothing to do with profitability and everything to do with hyper-growth. And it can't last. Well, it can last as long as venture capitalists value growth and some mythical future exit event as highly as they do. The cynical among us observe that some of their behavior looks something like a Ponzi scheme — if you are an early-stage investor and can persuade later-stage ones that this business is going to be the next Big Thing, you can get your money out even if the whole thing eventually implodes. The problem is that this removes critical discipline from the investing process. Companies attract funding because of a charismatic CEO or the promise of being first in a hot new category or cool technology, and not necessarily because the fundamentals of the business make sense. We're starting to see some rationality about this creeping in around the edges. Take Uber, whose theory of success (at least for now) is that it will eventually dominate local markets for both drivers and riders. If you believe that, then it's worth subsidizing both sides with venture money. Uber may well be Exhibit A of the first-mover-advantage illusion. In just three months, Uber lost over $5 billion. The real problem here is one that we've seen before — to seed a market, a startup subsidizes early customers. The theory is that once you have them in the door, you can eventually create pricing power and raise prices. Eventually, unless you have some other revenue stream like darkly trading in people's personal information, you have to charge enough to cover the cost of the service and make a profit. Once those $7 Uber rides start costing $30, riders will be back in their own cars or on the bus. Another "what were they thinking?" example? E-cigarette maker Juul.
With investments from Big Tobacco (already a red flag) and a ticked-off FDA disputing its claims that its products are healthier than regular cigarettes, Juul's path to profitable growth doesn't look like the smooth sailing someone must have expected when it raised $14.4 billion over eight funding rounds. It has also brazenly taken pages out of the cigarette marketing of yore — trendy models, lax enforcement of age restrictions, seductive flavors, and a shimmering sense of being cool — many of which are today illegal. The backlash among regulators and policymakers is only just beginning. Still Very Much a Boys Club So here's an interesting irony. Despite the enormous sums in play, little goes to female and minority founders. In 2018, all female founders put together received $10 billion less in funding than Juul took in by itself. The amount that went to female-founded companies was 2.2 percent of the total invested. If you assume that entrepreneurial talent is evenly distributed across the population, this seems like a glaring blind spot. Women, after all, make an awful lot of spending decisions. In the United States, women owned 11.6 million businesses employing over 9 million people and generating $1.7 trillion in revenue. And a Harvard study found systemic differences in how venture capitalists talked to female vs. male founders. The men got asked mostly about their vision and the upside of their businesses. The women? Only about the pitfalls and risks. And funding follows this defeatist impulse. A Morgan Stanley study blames a lack of familiarity, as most investors have little to do with non-male, non-white founders. The cost? $4.4 trillion in missed opportunities. It's time for a rethink.
https://medium.com/swlh/venture-capital-2-0-this-time-its-different-d748d524f4d4
['Rita Gunther Mcgrath']
2019-10-12 10:20:33.493000+00:00
['Business', 'Venture Capital', 'Finance', 'Entrepreneurship', 'Startups']
Geofencing: Impact, Spending, Limitations, and Examples for Marketers
Geofencing: Impact, Spending, Limitations, and Examples for Marketers How do you strategically acquire customers with digital tools in a specific area? Geofencing — also called location-based marketing — is a type of mobile ad targeting that involves a two-step process: first, mapping out a specific geographic area where you want digital ads to appear; second, triggering the ads when someone enters the virtual fenced-in area. Geofencing goes beyond standard geographic targeting by homing in on very specific locations, down to streets — or even buildings like malls or restaurants — and serving ads to consumers who enter those "fenced-in" areas. The ads themselves only show when someone crosses the virtual fence. They manifest in a variety of ways, including alerts on smartphones or internet-enabled devices, ads on digital billboards, and app notifications. Keep in mind that geofencing relies on app notifications and GPS information, both of which consumers must opt into before they can receive the ads. A creative example of a successful geofencing campaign was Burger King's Whopper Detour, a campaign that resulted in millions of downloads of the Burger King app and over half a million coupon redemptions from the promo. In a move that can be construed as either brilliant or evil (or a bit of both), Burger King targeted consumers who were either inside a McDonald's restaurant or within 600 feet of one. When a consumer entered one of the geofenced locations, the Burger King app pushed a coupon notification that enabled the user to get a Whopper at Burger King for just a penny.
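The trigger step described above boils down to a point-in-radius check against the user's GPS fix. A minimal sketch (the coordinates and the `store`/`inside_geofence` names are illustrative assumptions, not from any real campaign; production ad platforms do this server-side or via OS geofencing APIs):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, center, radius_m):
    """Step 2: the ad only fires once the user is inside the fenced radius."""
    return haversine_m(user[0], user[1], center[0], center[1]) <= radius_m

# Hypothetical store location; 600 feet is roughly 183 meters, as in the Whopper Detour promo.
store = (40.7580, -73.9855)
print(inside_geofence((40.7585, -73.9850), store, 183))  # user across the street
print(inside_geofence((40.7700, -73.9900), store, 183))  # user a mile away
```

Real campaigns would also debounce repeated entries and respect the opt-in requirements mentioned above, but the core "did the user cross the fence?" test is this simple.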
https://medium.com/better-marketing/geofencing-impact-spending-limitations-and-examples-for-marketers-7c74fb9860fa
['J. M. Dooley']
2020-02-20 13:57:13.854000+00:00
['Marketing', 'Technology', 'Digital Marketing', 'Advertising', 'Business']
[📣 A Data Engineer Speaks] Am I Valuable in the Job Market? How to Distill Your Experience and Successfully Land a Role at a Multinational Startup
We aim to inspire and educate data scientists worldwide, regardless of gender, and support everyone in the field.
https://medium.com/women-in-data-science-taipei/%E8%B3%87%E6%96%99%E5%B7%A5%E7%A8%8B%E5%B8%AB%E4%BE%86%E9%96%8B%E8%AC%9B-%E5%9C%A8%E6%B1%82%E8%81%B7%E5%B8%82%E5%A0%B4%E6%88%91%E6%9C%89%E5%83%B9%E5%80%BC%E5%97%8E-%E5%A6%82%E4%BD%95%E6%AD%B8%E7%B4%8D%E7%B6%93%E9%A9%97-%E6%88%90%E5%8A%9F%E6%94%BB%E7%95%A5%E8%B7%A8%E5%9C%8B%E6%96%B0%E5%89%B5-85a1dee66ad0
['Taiwanese In Data Science']
2020-10-03 01:46:40.009000+00:00
['Overseas Jobs', 'Job Hunting', 'Interview', 'Taiwanese In Data Science', 'Data Engineering']
Trump Administration Expands Anti-LGBTQ+ License to Discriminate
With just over a month left in office, the Trump administration has made a last-ditch effort to roll back nondiscrimination protections for LGBTQ+ Americans. The US Department of Labor's Office of Federal Contract Compliance Programs quietly issued a final rule on Monday, allowing federal contractors to discriminate against LGBTQ+ employees under the guise of religious freedom. Initially proposed in 2019, the new rule expands religious exemptions to nondiscrimination protections for faith-based contractors and for-profit companies that may "prefer" employees of a certain religion, sexual orientation, or gender identity. Previously, the rule only applied to faith-based non-profit organizations. The new rule seeks to provide a "clearer interpretation" of the Civil Rights Act of 1964, which banned employment discrimination on the basis of race, color, religion, sex, or national origin. Former President Barack Obama signed an executive order in 2014 adding sexual orientation and gender identity to this list of protected classes. The rule also blatantly contradicts Bostock v. Clayton County, which effectively prohibits workplace discrimination against LGBTQ+ workers. In a summary of the final rule, the Department of Labor cited the Supreme Court cases Burwell v. Hobby Lobby and Masterpiece Cakeshop v. Colorado Civil Rights Commission, both of which allowed businesses to discriminate against employees and consumers under the guise of religious freedom. "Religious organizations should not have to fear that acceptance of a federal contract or subcontract will require them to abandon their religious character or identity," Secretary of Labor Eugene Scalia said in a statement on Monday. Scalia is the son of the late Supreme Court Justice Antonin Scalia, who once compared gay people to murderers and animal abusers.
LGBTQ+ advocacy groups have promptly condemned the new rule and chastised the Trump administration, highlighting the impact it could have on LGBTQ+ people, women, and religious minorities. “This action by the administration is blatantly offensive, unnecessary and simply unacceptable, which is further compounded by the fact that they are attempting to jam it through a lame duck session,” Alphonso David, the President of the Human Rights Campaign, said in a statement. “Since taking office, the administration has worked around the clock to dehumanize and demean LGBTQ+ people all while misrepresenting the law to justify creating a license to discriminate against people including on the basis of gender identity and sexual orientation.” With just 43 days left in office, Trump is spending his remaining time as president weaponizing religious freedom and using it as a license to discriminate against anyone who does not adhere to or abide by a certain standard of beliefs. After four years of rolling back basic rights and protections for LGBTQ+ people, this is just his administration’s latest and hopefully final attempt to make queer and trans people feel unwelcome and unsafe in the US. While this new rule is a clear attack on the LGBTQ+ community and other marginalized people and should be treated as the threat that it is, President-elect Joe Biden will likely repeal it shortly after his inauguration in January. Biden has already committed himself to being a “partner” to LGBTQ+ Americans, promising to sign the Equality Act in his first 100 days in office, which would essentially ban discrimination on the basis of sexual orientation and gender identity. And while there is no way to guarantee exactly what Biden will do once he takes office, for once there is hope.
https://medium.com/an-injustice/trump-administration-expands-anti-lgbtq-license-to-discriminate-a60b19a62531
['Catherine Caruso']
2020-12-10 01:11:14.551000+00:00
['Justice', 'Equality', 'Politics', 'LGBTQ', 'Society']
A Brief History of Money
A Brief History of Money Know where it was, see where it's going. Photo by Karolina Grabowska from Pexels Historians believe that money was invented much earlier than writing, perhaps anywhere from 6 to 9 thousand years ago. Unsurprisingly, the early history of money is not well-documented. Before the invention of money, people in need of goods or services would enter into an agreement to trade, or barter. Bartering was deeply inefficient. If a person wanted apples and only had chickens to trade, he would have to find another person who not only had apples but also desired chickens. Even if those individuals happened to find each other, they would need to come to an agreement on how many apples were equal to one chicken, or vice versa. Coins, Banknotes, & Commodity Money Around 3 thousand years ago, civilizations started to select precious metals like gold and silver as their primary currency. Nonetheless, standard units of currency still did not exist. The divisibility of currency remained a problem until around the 7th–6th century BC, when societies started to form precious metals into coins of equal weight. Despite this development, the bulkiness and weight of coinage was deemed inconvenient. This led to the development of representative money such as banknotes, or paper money. Banknotes functioned as a claim check on the gold or silver held in a treasury and promised by a central bank. Banknotes were equivalent to gold because you could always convert them into gold. The use of gold as "commodity" money was not accidental. Gold has several unique qualities that make it a good storehold of wealth, such as scarcity, durability, easy recognizability, and divisibility. Gold is also very labor-intensive to locate, mine, refine, and mint. A significant amount of energy must be expended to generate a relatively small amount of additional gold.
The Gold Standard and Bretton Woods Monetary Systems In the 19th and early part of the 20th century, many nations adopted a monetary system called the gold standard. In this system, the value of a nation's currency, or paper money, was defined by a fixed quantity of gold. In 1944, with World War II coming to an end, the Allied nations met at Bretton Woods, New Hampshire, to rebuild the international economic system. At that time, the United States held two-thirds of the world's gold and was emerging as the most prosperous nation in the world. The outcome of this conference was the Bretton Woods monetary system, in which it was decided that the US dollar would be used as the world's reserve currency. Under the Bretton Woods system, countries pegged their currencies to the US dollar at some fixed rate, and the US dollar, itself, was backed by a fixed quantity of gold: 35 US dollars per ounce of gold. In this way, the world's currencies were still backed by gold through the US dollar. In the beginning, the Bretton Woods system worked well. While the United States held the majority of the world's gold, the system appeared stable. However, a number of events caused the system to begin to fall apart. In the 1950s and 1960s, the economies of Germany and Japan began to recover while the relative economic output of the US declined. The United States was also in the midst of the Vietnam War and increasing monetary inflation. By 1966, foreign central banks held $14 billion in US dollars while the US only had $13.2 billion in gold reserves. The dollar continued to drop relative to foreign currencies and nations continued to redeem their US dollars, with some exiting the Bretton Woods system altogether. The "Nixon Shock" and Fiat Money In August 1971, in response to increasing inflation, President Richard Nixon announced that foreign governments could no longer exchange their US dollars for gold.
This unexpected event became known as the "Nixon Shock." This historic announcement led to the creation of a new monetary system in which the currencies of the world were no longer backed by gold or any commodity of intrinsic value. This system became our current monetary system. Most modern paper currencies, such as the US dollar and the euro, are government-issued and not backed by a commodity. They are known as fiat currencies. A basic underpinning of the fiat monetary system is that governments can declare that something of no intrinsic value, such as a piece of paper, has value. In other words, the value of a fiat currency is determined by government promises and the people's trust in those promises. The Future of Money To assess the merits of a currency, one must recognize that money has two primary functions: 1) money serves as a medium of exchange, and 2) it is a storehold of wealth. Fiat money certainly works as a medium of exchange, but many have questioned whether it is a good storehold of wealth. Since, in our present monetary system, money isn't backed by anything of intrinsic value, central banks around the world have been able to print effectively unlimited amounts of money. Therefore, fiat money risks losing value either because central banks print too much of it, leading to inflation (or even hyperinflation), or because people lose trust in its value. Earlier this year, the billionaire investor Ray Dalio made headlines for exclaiming, "Cash is trash!" In essence, Dalio was warning about the tendency of people to underestimate the risk of holding cash, a fiat currency, as a storehold of wealth. Dalio pointed out that, throughout history, every currency has either ended or been devalued over long periods of time. It is almost impossible to answer the question "what is money?" exactly, because money is constantly changing and evolving over time.
Some people think that the fiat system will persist, others believe we will return to the gold standard, and a small contingent think bitcoin is the next reserve currency. Only time will tell.
https://69411.medium.com/a-brief-history-money-b74ad13d8348
['Steven Yamada']
2020-10-15 17:30:45.106000+00:00
['History', 'Money', 'Economy', 'Society', 'Politics']
💸 What does Unsplash cost in 2019?
3 years ago, we wrote 'What does Unsplash cost?' to give a totally transparent look at the bills associated with hosting one of the largest photography sites in the world. Since then, Unsplash has continued to grow tremendously, now powering more image use than the major image media incumbents, Shutterstock, Getty, and Adobe, combined. With Unsplash's public API, we power over 1,000 mainstream applications, including Medium, Trello, Squarespace, Tencent, Naver, Square, Adobe, and Dropbox. All of that growth means two things: more traffic and bigger bills. In the interest of transparency, Chris and I thought we were overdue for an update. It's 2019. What does it cost to host Unsplash? Then Back in 2016, Unsplash had just crossed 1 billion images viewed and 5.5M photos downloaded per month. Our team was smaller and our product was a lot less developed, which meant fewer services and less in-house processing. We had one main application, a traditional Rails monolith, that consumed a handful of services to create the basic Unsplash experience. Heavy features like search and realtime photo stats were in their infancy, which led to much simpler data processing requirements and the use of 3rd-party services like Keen and a handful of CRON jobs. The final monthly breakdown for April 2016 was: Web Servers: $2,731.23 Monitoring: $630.00 Data Processing: $1,000.00 Image Hosting: $11,170.00 Other: $2,127.39 Total (USD): $17,658.62 Now A lot has changed. For one, Unsplash is a hell of a lot bigger. 10+ times bigger. We now get more traffic from our API partners than from our own website and official apps, despite these growing significantly. Partnering with some of the largest consumer-facing apps in the world has pushed our engineering team to match their practices around redundancy, monitoring, and availability, which requires more supporting resources and services.
Our product team has continued to push the envelope on core features like search and contributor stats, requiring more and more data to be processed in greater and greater volumes. All of these things have pushed our architecture to be more complex, while also increasing the baseline costs. Web servers Total monthly cost: $29,763 We continue to use Heroku as our main web platform. Despite its premium cost over AWS, Azure, and Google Cloud, Heroku's built-in deployment and configuration tools allow our team to move faster, more confidently, and more reliably. As we've detailed previously, the alternatives would undoubtedly be cheaper on paper. But in reality, the increased simplicity and freedom offered by Heroku for a small, product-focused team is a major cost-savings advantage. In addition to our main web servers and databases on Heroku, we use Fastly for distributed CDN caching, Elastic Cloud for our Elasticsearch clusters, and Stream for our feed and notification architecture. Web Server costs breakdown for February 2019 Monitoring Total monthly cost: $7,679 Our team is small for Unsplash's size, with our total product team counting in at just 11 people. With no one dedicated to dev-ops, ensuring Unsplash runs smoothly and never goes down requires a lot of instrumentation and reporting. Despite the volume of metrics we monitor and report on, New Relic, Sentry, and Datadog remain fairly inexpensive solutions. Our logging is certainly our largest monitoring expense, but the detailed information is crucial when debugging issues or rolling out new features. Data Processing Total monthly cost: $15,223 Data processing has been the area with the largest relative increase since 2016. Back then, analytics and data were an afterthought in our development process. We relied on tools like Google Analytics for user analytics and Keen for product metrics like photo views and downloads.
Since then, we've needed to expand our data collection, aggregation, and reporting significantly, both from a product and a company perspective. As Unsplash has grown, the volume has also increased considerably, with hundreds of millions of events tracked every day. We've replaced Google Analytics and Keen with an open-source data pipeline, Snowplow Analytics. Snowplow takes care of the data collection and formatting, allowing Tim, our data engineer, to focus on data aggregation, modelling, and visualization. We've also expanded the role of the data architecture in the product to handle all of our machine learning and search processing. Going forward, we expect this to continue to be the biggest area of expansion. Data processing costs breakdown for February 2019 Image Hosting Total monthly cost: $42,408 Imgix is our single biggest expense, but we love it. Yes, there are cheaper options, but trust us when we say that they aren't as good for what we do. We send petabytes of data through Imgix's CDN and render more than 250 million variations of our source images every month. Their reliability, performance, and flexibility are unmatched, and negotiating our contract through them actually allows us to discount our CDN costs thanks to their bulk negotiations with CDN providers.
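For a rough sanity check, the four 2019 category totals quoted above can be summed (a sketch; the post doesn't quote an "Other" line for 2019, so treat this as a lower bound on the monthly bill):

```python
# Monthly cost categories quoted in the post for February 2019 (USD).
costs_2019 = {
    "Web servers": 29_763,
    "Monitoring": 7_679,
    "Data processing": 15_223,
    "Image hosting": 42_408,
}

total = sum(costs_2019.values())
print(f"Total: ${total:,}/month")  # prints Total: $95,073/month
```

At roughly $95k a month, that's about 5.4 times the April 2016 total of $17,658.62 quoted earlier.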
https://medium.com/unsplash/what-does-unsplash-cost-in-2019-f499620a14d0
['Luke Chesser']
2019-04-08 15:50:22.356000+00:00
['Photography', 'Articles', 'Technology', 'Software Development', 'Startup']
How to Structure a Detox Day
Daily detox routine Morning Oil pull This is an Ayurvedic technique I have recently been experimenting with. It's not as awful as I expected, but I also wouldn't say that it's pleasant. Essentially, oil pulling is when you put a tablespoon of oil in your mouth, usually organic coconut oil, and swish it around for 10–15 minutes before spitting it out (not down the drain, as the oil will clog it). What this does is remove fat-soluble toxins and bacteria from your mouth. This not only helps to clean your teeth and gums for overall improved oral health, but has also been shown to help with inflammation in the body. Implementation: Do this on an empty stomach, first thing in the morning. Grab a spoonful of coconut oil, put it in your mouth, let it melt, and then swish it around. Hydrate Hydration is very important in the morning. If you don't do anything else when you wake up, at least hydrate. Your body will be very dehydrated from breathing and sweating during the night, especially since you likely haven't had a drop of water for at least 7 hours. I always keep a bottle of water on my bedside table and try to drink the whole thing as soon as I wake up (or after my oil pulling, if I'm doing that). You can add a pinch of pink Himalayan salt and a couple of teaspoons of lemon juice to make the water more alkaline, which can counteract the state of acidity our bodies are often in, as well as replace the precious minerals we have lost overnight. Drinking water as soon as you wake up will rehydrate the body and start to flush toxins out. Implementation: Drink a bottle of water, or at least a tall glass of water, as close to waking as you can. Switch to decaf Caffeine mimics adenosine, the neurotransmitter in the brain that makes us feel tired. Caffeine binds to the adenosine receptors, thereby blocking adenosine itself from binding, which is how caffeine makes us feel awake.
However, continued caffeine use causes our brain to create more receptors, which is why it takes more and more caffeine to achieve the same buzz. This also means that when we don't have caffeine, or when the caffeine runs its course, there are a lot of receptors for the fatigue-inducing adenosine to bind to, and we experience a huge crash in energy. I love coffee, so I'm not willing to give it up altogether. People talk all the time about quitting caffeine, and I'm sure there are amazing benefits to this. But coffee and tea also have amazing health benefits, and research has consistently linked coffee consumption to increased longevity. It tastes good, and I like the focus and energy it gives me. But you need to be aware of what it does to your body. A one-week detox from caffeine will allow your brain to get rid of some of its adenosine receptors, via the use-it-or-lose-it mechanism. If you want to learn more about the health benefits of coffee, and the healthiest way to brew your coffee, see these articles: Implementation: Find yourself a bag of decaffeinated coffee beans and use these instead of your regular coffee beans. Photo by Nathan Dumlao on Unsplash Intermittent fasting I never used to fast, and all the times I tried it I could feel my sympathetic nervous system being stimulated. My body was stressed, and I didn't react well to it. But over time I have slowly (and accidentally) pushed my breakfast time further and further back. I just find I have more time in the morning to do deep work if I don't eat. And now I feel great not eating for 14–16 hours, and barely think about it. For some people, fasting is stressful on the body. I don't recommend fasting if you've had a bad sleep, have a stressful day ahead, or are feeling particularly hungry. You don't want to stress your body more. Eat if you're hungry. But going for extended periods of time without food gives the digestive system a break and induces cellular autophagy.
This is essentially a cleaning-out process in which the body gets rid of old cells and cellular waste products. Similarly, during the day take breaks from eating. Try not to snack between meals. It will have benefits for your gut as well as your circadian rhythm: "By giving your body a break from food for five or six hours, you are allowing your digestive tract to rest and reset its circadian rhythm. This is an important step in healing the brain-gut axis." — Dr. Suhas Kshirsagar There are some considerations to be aware of when attempting to fast, especially for women. More about this can be found here: Implementation: For this practice, the most commonly used strategy is to eat dinner as usual and not eat again for another 14–16 hours. So, if you have dinner at 7pm, don't eat until at least 9am the next day. You can structure this however you want, though; for example, by eating breakfast but then skipping dinner. Rebounding I always thought this was a crazy and odd thing to do. But there are so many benefits to jumping on a mini trampoline, especially first thing in the morning. It gets your heart rate up, burns more calories than running, and stimulates your lymph system by increasing lymph flow and drainage. This will improve immunity and fitness levels, as well as increase blood flow and boost circulation. Here is a list of 197 other benefits of rebounding. Implementation: Find yourself a mini trampoline, a regular trampoline, or even a skipping rope. Jump for about 5 minutes each morning. Avoid deodorant and other artificial body products During this detox week, I try to avoid things like deodorant and face moisturizers as much as I can. I'll use organic coconut oil as a moisturizer, and I'll either skip deodorant (which can become impractical) or switch to a natural one or a few drops of essential oils such as tea tree oil. A lot of the toxins we absorb come from these products.
Anything you put on your skin is going to end up in your body, so a good recommendation is to not put anything on your skin that you wouldn’t eat. Thankfully, many companies now make such products. They can be expensive, so if you want something cheaper, you can also Google all sorts of recipes for how to make your own. Implementation: Try to avoid putting anything artificial on your skin — deodorant, moisturizer, perfume, etc. Find natural alternatives, or go without for the week.
https://medium.com/live-your-life-on-purpose/how-to-structure-a-detox-day-965e3a5701a0
['Ashley Richmond']
2020-12-17 18:02:42.058000+00:00
['Self Improvement', 'Lifestyle', 'Health', 'Advice', 'Routine']
Custom App Development Vs. Off-The-Shelf Solutions (Business Edition)
Custom App Development Vs. Off-The-Shelf Solutions (Business Edition) I've studied all the alternatives to custom apps, and here's what I've learned. Every second person on earth has a smartphone. According to rough estimates, there are already about 4 billion smartphone users worldwide. Hundreds of billions of dollars are spent annually on mobile applications. And there is a reason for that. People are simply comfortable using smartphones and, on top of that, they like to use mobile applications. Well, we see eye to eye on this issue. A smartphone is always close at hand, all day long I have an ingenious and multifunctional device at my disposal, and most importantly, virtually all the necessary services are available at the tap of a finger. Oh, I wish it were like this! Alas, things are a little different. More than 2 million applications are available for download in the App Store, and almost 3 million in the Google Play Store. The numbers are growing every day, and that is totally cool. But this option is not suitable for everyone. Above all, the problem arises when it comes to business. Therefore, I decided to figure out whether it is worth investing in the development of a custom application, or whether a modern business can find an alternative. Before moving on to mobile applications, let's take a look at how things stand with web services. This option undoubtedly has the right to exist; however, such solutions are often sold to companies on a subscription basis, and when the subscription expires, you could lose some of your information, and your previously established workflow might be gone as well. In addition, the use of web services requires an Internet connection, which in some situations is not always possible or convenient. It is also worth mentioning that such a solution may only suit the tasks originally foreseen by the developers. Therefore, for specific tasks, a cloud-based web service is not the best choice.
Another option is to install a paid or free app from the store. Such solutions are already much more functional than web applications, since they can usually work offline and offer more functionality thanks to their native nature. However, they are not free from shortcomings, just like web services. You are still dependent on the company that provides you with services. You cannot be certain about the safety of your data. And while the features of the application may be great, you still would not get the ultimate app, because it is designed to suit the most average user. And sooner or later it all boils down to the fact that seemingly ideal alternatives are far from ideal in almost every aspect. Here are the main disadvantages: they rarely account for the specifics of your business, offer no fine-tuning for each individual client, and lack scalability. However, it would be foolish to deny that they have some advantages. As compelling arguments in favor of off-the-shelf solutions, I want to point out the following: low cost, no development time required, and updates from the service provider, which also delivers all the technical support. What are the bright sides of custom mobile apps for business? The most important and indisputable advantage of custom mobile apps is the fine-tuning of absolutely every aspect of the application. Choosing custom development, you opt for a unique product that is created only for you and with your direct participation. You get a tool that belongs only to you and will help optimize a whole bunch of business processes in your company. You also get complete freedom in terms of design and functionality of the final product. Custom apps are truly scalable — you can get any upgrade you want as soon as the need arises. In addition, a custom application is able to provide you with 100% security of your internal data. However, there are aspects of this option that you should be aware of: cost and development time.
Undoubtedly, any mobile application is an important step and a serious investment for any business. Since software engineering is a complex process, a large number of specialists are involved in it. Planning, development, and testing can take months, but the result is flawless performance. An accurate project estimate will help you stay on budget and still get everything you were craving. In addition, every dollar invested in the project is a dollar invested exclusively in your business. Drawing Conclusions Summing up, I want to note that developing a custom mobile app and using off-the-shelf solutions are two options that both have the right to exist. Off-the-shelf solutions are ideal for small businesses that are not ready to invest in development. The market for such solutions is full of programs that solve a raft of generic tasks for free or at a low price and can help establish basic business processes. However, you should not expect more from them. A custom mobile application is the choice for businesses that are ready to move to the next stage. By investing in custom development, you get a perfectly tailored application that is also a powerful tool in the hands of the company. If you feel like you’re hitting a ceiling, a custom mobile app can help you break through it. To be sure, it will not merely help you keep up; it will open up completely new possibilities for optimizing your workflow. In my opinion, the whole picture is perfectly described by the phrase “you get what you pay for”. An off-the-shelf app is not a bad option, but, as usual, the devil is in the details. If you need a perfectly balanced and neat solution, then a custom mobile app is your way. Please subscribe if you like this format. If you have any questions or comments on the guide, let us know!
https://medium.com/fively/custom-app-development-vs-off-the-shelf-solutions-business-edition-2532947d8b28
['Vsevolod Ulyanovich']
2020-09-15 09:11:19.403000+00:00
['Business Strategy', 'Business Development', 'Software Development', 'Mobile App Development', 'Investment']
Go Functions (Part 3) — Variadic Functions
Photo by Anthony on Unsplash Variadic functions are functions that are flexible about the number of arguments you can pass into them. This article is part of the Functions in Go series. Variadic functions are defined using the 3-dot notation, ..., and here’s an example of this, where shoppingList (line 8) is a variadic function. Line 21: We call the shoppingList function with 3 string arguments. Line 8: The ...string indicates that “items” is a variadic input parameter. This tells Go to capture all the arguments into a string slice variable called “items”. Lines 10–11: This just confirms that items is indeed a string slice variable housing all the arguments. Lines 13–15: The for-loop iterates through the slice and prints out each slice entry.
https://sher-chowdhury.medium.com/go-functions-part-3-variadic-functions-f37734b8963
['Sher Chowdhury']
2020-12-30 15:10:25.661000+00:00
['Variadic Functions', 'Google', 'Go Programming', 'Golang']
Lessons From China’s COVID-19 Visualizations
Lessons From China’s COVID-19 Visualizations An interview with three data viz practitioners who have responded to the urgent need for communicating information about the coronavirus pandemic in China The threat of the coronavirus has frightened people around the world. Disentangling the real from the imagined in that threat can be especially difficult. In late February the director-general of the WHO warned, “it is impossible to predict which direction this epidemic will take.” Getting information to people about the state of the epidemic is especially important, yet it is often poorly done. We face what the WHO has termed an “infodemic,” in which rumors and misinformation confuse citizens. People looking to understand real threats are often faced with inadequate visualizations. In China, visualizers in traditional media and academia responded to this need. Three different approaches to the challenge can be seen in the works of the following three visualization practitioners, whom I interviewed for this story. Xiaoru Yuan (XR) is a professor at Peking University and leads the Peking University Visual Analytics group, which created many COVID-19 visualizations. Huang Zhiming (HZ) is the CEO of Data-viz.cn, whose coronavirus visualizations in mass media have received over 1 billion views. Xiang Fan (XF) is a designer, digital media researcher, and professor at Tsinghua University. Her COVID-19 work gained a strong following. Kevin Maher (KM): How did each of you begin to get involved with visualizing the coronavirus for the public? What was a challenge you faced? XR: Our team has over 10 years of experience in visualization, and since I started the visualization research at Peking University in 2008, we have often worked on socially relevant problems. Much of our visualization for the epidemic was for an expert audience.
For example, we developed visualizations for experts who study patient medical records in order to find out which medical treatments or prevention methods are most effective. In our work made for the general public, we put scientific insights into our visualizations in an understandable way. Fig 1 The novel coronavirus compared with other historical viruses. Interactive While talking to professors of public health and medical science, we learned that the current epidemic has several special characteristics, unlike other infectious diseases. We decided we wanted to do further comparison. In the medical literature we saw COVID-19 compared to other diseases in static graphics, but those comparisons used R0 [the basic reproduction number], which is difficult to measure accurately because the information needed to calculate it is quite incomplete. After attempting different metrics, we found a dynamic case fatality rate to be more appropriate. This visual (Fig 1) can give an overall picture for people to be more aware of the nature of the virus in comparison to other viruses, as well as its dynamic fatality rate, infection number, and mortality number. Fig 2 A detail of the Tencent News Pandemic Dashboard, one of the most popular mobile dashboards in China. Mobile version HZ: My media company has worked on designing and developing visualizations for the largest media outlets. At the beginning of the crisis we worked on the visualizations for Tencent News, Baidu, and central governmental news providers. Toward the beginning of the crisis, when people were most concerned, our visualizations received the largest amount of attention. Our dashboard for Tencent News (Fig 2) received over 100 million views per day during this time. One of the most difficult challenges in our work was to fact-check the data sources against each other, which required an immense amount of effort at the beginning of the crisis, but the reliability of the data was most critical.
Fig 3 View of the coronavirus data with circle packing. Interactive version XF: Nobody was aware of a mysterious virus spreading across the country as the Chinese New Year approached. When human-to-human transmission was confirmed, all of us quarantined ourselves at home and eagerly searched for more information about the epidemic. We knew that, given the unknown and unpredictable nature of the virus, we weren’t the only ones stuck in this bind. However, as visualizers, we could be among the earliest information providers using data. One challenge to our work was that at first there were no unified databases for us to use, so we had to pool data from various sources, such as the reports by provinces. FlowCases is a data visualization platform for people dealing with the coronavirus epidemic. It provides two rare perspectives, different from the mainstream media. One focuses on the space and hierarchy of the data (fig 3) and another on the time distribution of confirmed cases (fig 4). KM: There was a kind of “infodemic” in which misinformation and rumors in social media confused citizens. How do you see the role of your visualizations for the coronavirus? HZ: Traditional news media had a great impact by finding ways of communicating accurate information to counteract rumor. There were so many different kinds of information sent out by social media, some true, some false. Our job was to spread the most authoritative and reliable information available. As for the visualizations, our work is different from the works of academics or small-market visualizers. You can see my analysis of why in the figure below. In visualization, the most complex and cool visualizations are also the most costly and difficult to produce. Fig 5 Analysis by Huang Zhimin as to why mass media have simple graphics. For our work in mass media, the ability of a visualization to spread is important. The more complex visualizations have trouble spreading because they are more difficult to understand.
The cooler visualizations are more attractive, and more spreadable (fig 3). So when we take both cost and ease of spread into consideration, our team focuses on simple yet cool visuals. For example, Xiang Fan usually creates much more complex and cool visualizations that are difficult to produce. Our work is simpler, since it is commercial and needs to prioritize costs and spreadability, so it is either cool and simple or simple and artless. XF: Like the medical science community, the visualization community was faced with an unknown, rapidly developing, severe contagion that nobody, whether part of the mainstream media or civic media, had any experience in communicating. At the beginning of the epidemic we saw our role as presenting information in an effective way that had not been tried by the mainstream media. In much of the mainstream media we saw a restriction of design possibilities to only basic charts and maps (fig 6). These graphics left important questions unanswered, for example, the ability to see trends in how new cases are developing in the provinces. Fig 6 A survey of existing coronavirus visualization platforms and works in China. Many of the large-scale platforms have similar designs (clustered on the left). Source: Fei Luo, Beijing Institute of Technology. Our project aimed to allow people to closely observe the cities where they and their loved ones were. We kept checking reactions to our work to see if it clearly communicated information to the public. Our coronavirus works have received more than 1.5 million views from mobile users. However, no mainstream media source used or promoted our visualization. Both the mainstream media and the civic visualizers clearly understand that different visualization approaches can lead to different cognitive outcomes about the epidemic. Before any institution can decide whether a novel kind of visualization is “safe” for the mainstream media, rejecting the project is the safest thing to do.
We try to show a different perspective on the data that people don’t see in mainstream platforms. XR: We want people to look at our visualization and find out the answers to the questions they are asking themselves. In the research community, the goals of visualization are to be effective and also novel. Our method of accomplishing this is to first check the “design requirements.” We check the literature and interview experts to find and define the tasks that are critically important. We then try to tell a story that delivers the information to the receiver. Fig 7 COVID-19 Barometer showing the new cases in China. Interactive version For the coronavirus, we needed to identify what information the users want to understand, what they are searching for. One characteristic of the data on the coronavirus is that the overall confirmed cases were highly concentrated in the region of Wuhan at the beginning. However, most people who see the visualization may also want to know how the data applies to regions closer to them. For the COVID-19 Barometer (fig 7) we researched users and found they were interested not only in the overall accumulated infections, but also in how their own region was affected. They also want to know how the infections are changing over time, especially closer to the present. They want to know at what critical moment the situation will get better. Our visual encoding method and interaction design support these tasks. By color-encoding the trend of new confirmed cases in each region every day, compared with that of the previous day, viewers can clearly see how the situation is developing. While in the research community we look for novel visualizations, sometimes the beginnings of effective solutions can be found in previous work. Our team has many years of experience in visualization, so we can adapt to new circumstances when something critical needs to be visualized rapidly. Fig 8 Visualization showing radiation levels within a fictitious city.
Last year we developed a visualization for an expert audience regarding radiation levels in a city (fig 8). The work won an outstanding award in the IEEE VAST Challenge 2019. Some aspects of the design requirements were similar to the situation we were facing now. We decided there were aspects of the layout that could successfully deliver information to the audience. KM: What would you like to tell the visualization community outside of China? XR: We would welcome talented visualizers to work with our team. We have over 30 faculty members and 150 students who have participated in our work. They come from a variety of backgrounds, inside China and out. We work with people with many different kinds of expertise, including data management, data mining, subject matter experts, visualization research, and art and design. We have also worked with top news agencies and journalists so that our work can reach a wider audience. We have worked on developing COVID-19 visualizations for specific audiences and applications, such as for the African public (Fig 9). A sample of our work can be seen here. Fig 9 COVID-19 in Africa. Interactive version This COVID-19 epidemic is a war between all human beings and the virus. In our role as visualizers, we can harness complex information and equip people with better situational awareness. We have an open call for professionals to work with us; you are welcome to join. With our expertise and by working together we can help society. HZ: My advice to visualizers is to consider several fundamental issues before you begin visualizing. Is the most important goal of the visualization to spread, or to give people the ability to analyze data? How much expert knowledge does the user have? What device would be best for the visualization, desktop or mobile? Once these considerations are made, we can create more appropriate visualizations.
XF: People around the world have endless questions related to the epidemic, and many of them can be answered through data. Visualization has a role not only in explaining data, but also in answering people’s questions by allowing them to explore the data themselves. No one knows the extent to which the epidemic will spread or the full impact of its spread. Certainly, no one knows the most effective approach to visualizing the many aspects of the data. As visualizers, let’s try our best to think through the many ways we can give correct answers to the many questions people have. Final thoughts There is a phrase by Confucius, 三人行，必有我师, which translates as “if three of us walk together, at least one of the other two is good enough to be my teacher.” In modern times it is used to mean we can learn from those with different perspectives. Outside of strong ties in academia, much of the Chinese data visualization community lives apart from the rest of the world. Only seven of the first 3,500 DVS members were from the Chinese mainland. In my experience at the Chinese Academy of Sciences and in starting the Student’s Data Visualization Society at Tsinghua University, I often see coworkers and classmates sharing insightful Chinese articles and Chinese visualization projects that are not published in English. Conversely, many resources from abroad are not available in China. It seems there is much the communities could learn from each other. In late January, Xiaoru Yuan, Xiang Fan, and Huang Zhimin used data visualization to respond to the urgent need for communicating information about the epidemic to the public. I find that their three different approaches to visualization each have their own strengths and undeniable impact. Outside of the visualization community, I saw their works shared on social media as people spread early news of the epidemic situation.
For me, the most important lesson from China’s epidemic visualizations is the opportunity for visualizers to learn from each other. The importance of learning from China’s visualization community is amplified because China has already passed through much of what the rest of the world is now experiencing. We have the opportunity to learn from what worked and what didn’t. In these times of living apart, we can be better connected and learn from each other. This interview was conducted by Kevin Maher, a freelance visualizer pursuing graduate study at Tsinghua University and assisting research at the Chinese Academy of Sciences. You can read more of Nightingale’s coverage of the coronavirus pandemic here.
https://medium.com/nightingale/lessons-from-chinas-epidemic-visualization-f3b25c136d51
['Kevin Maher']
2020-04-09 13:31:01.026000+00:00
['Design', 'China', 'Covid 19', 'Programming', 'Data Visualization']
Have You Thought About Making Money Selling T-Shirts Online?
How would you like to make $100,000 in just five months selling t-shirts online? Benny Hsu did it, and there is a chance you can too. Don’t get me wrong, it’s not easy at all, but hey, it’s 100K! How would you also like to have a business where you can have fun? Making people happy with your t-shirt designs is a blast! How about working part-time from anywhere in the world? Once you get your store set up, you can. The first thing you will probably ask is, “if this business is so great, why isn’t he doing it?” It’s a valid question, for which I have an easy answer. I am a writer. I love to write. Writing, taking care of the business of writing, and building a brand take all the little bit of time I have. I came up with an idea for a dropshipping business selling t-shirts with a “beard culture” theme, called Remarkable Gentleman. I set it up and made a few sales to see if it was a valid idea and whether I could build a story around it. It was, and I could. Source: Remarkable Gentleman. Image by the author. I still run it in the background, but I’m not pushing it. Writing takes most of my time, but when things settle a bit, I’m going to put up a few more designs, run some Facebook ads, and make some profit. This business will be good for you if you have about 20 hours a week to spare. I set it up once already, and since I am a writer, I documented the whole creation process so I could share it in the future. I bet you can’t wait! Let’s get started. Getting set up There are some important things you should do in the beginning, like setting up a business entity and choosing a business name, but we won’t talk about that here, because the subject is covered in detail in plenty of places on the web and all you have to do is Google it. There are a few things I can help you with, though. Pick a niche What topic or subject will your t-shirts cover? What will you focus on? The more specific you are, the better. For example, maybe you want t-shirts for golf pros or coffee lovers.
Maybe you are, at heart, a Game of Thrones fan and you want to sell shirts about the show. Pick something you love, because you will be working on your store every day. Don’t pick something you will get tired of in a week. Your niche will be one of the first things you pick, because it will most likely decide your business name, logo, t-shirt designs, colors, and just about everything else. Choose wisely, but I will say: most people know what niche they want right away! Where will I sell my t-shirts? This is a point where you must make some more tough choices. You could choose one of the larger platforms, like Shopify or WordPress, design a store, set up payments, set up your print-on-demand plugins (Printful), and start selling t-shirts. Or, you can do what I did and use Teespring. Teespring is great because you don’t have to set up a website; all you do is enter a bit of information, add your designs, and start selling. And if you want to expand, you can even sell other items like coffee mugs and iPhone cases with your designs on them. Now all you need is the designs. Choosing t-shirt designs The great thing about this important step is that you don’t have to be a designer and do it yourself. It’s easy to commission designs on 99Designs or Fiverr.com. Make rough drawings, choose a color palette, and let the designers do the rest. I picked a person who advertised t-shirt designs on Fiverr, because when your order is delivered, you should have the design proof and t-shirt mockups. These are the actual files with your design that will go on the t-shirts, plus sample pictures of what the t-shirts will look like when shipped. You will put these designs in your Teespring store software. Social Media Setup We will go into more detail about social media later, but there are a few things you need to know to launch your store. You need to set up your social media accounts on your chosen platforms, so you have the usernames available to input in your store.
How do you know which social platforms are the best? You should have accounts on Instagram, Facebook, or Pinterest, as they are very visual and great for t-shirts. There may be others you also use, like Twitter, but it’s up to you. Now may also be a good time to start getting comfortable with your platforms of choice, because you will be working with them every day. The best places to start, if you don’t know anything, are the tutorials on Teespring and a good old Google search. This would also be a good time to set up the branding elements for your social media accounts, like covers and profile photos. If you don’t know how to create them, Fiverr is a cheap place to get them done. Photo by NordWood Themes on Unsplash Do you need a business plan? If you are an MBA and think you need a business plan, feel free to write one. If you don’t feel the need, write down the answers to these questions and keep them for reference: What is your company name? What is your company tagline (if you have one)? What is your niche? Who is your target audience, or what does your ideal client look like? What is your USP (what makes you unique)? How much do you charge? What is your monthly revenue goal? How many new or repeat customers do you need to achieve this goal? How do you get new customers? Who makes up your team? How will you measure success (number of customers, monthly revenue, etc.)? Sometimes it’s good to get it down in writing, even if you must guess the answers in the beginning. Adding T-Shirts to Your Store After your store is set up, you will want to add t-shirt designs. The process is quite easy. You’ve already commissioned the designs, so all you have to do is load the image files into the Teespring application software. This process ensures the t-shirts are printed properly. Teespring will generate mockups for you in any color you want, and all you need to do is create the descriptions.
Descriptions Product descriptions are one of the most important things you need to get right to guarantee you get sales. Here are some tips to make the most out of your descriptions: Be specific — What kind of fabric is used in the t-shirts? How does it feel to wear them? Do you want to include a size guide to help your customers get the right fit and entice them to press the buy button? Be creative — This is where you can shine. Don’t use the same wording as everyone else. Try to be different, edgy, or funny! Don’t forget your audience — Remember who the people are that will be attracted to your t-shirts. How do they talk? Is there specific jargon they use that no one else does? How can you get their attention? Stories are best — People love a good story. If you can somehow create a story about your products, people will respond! If you carry the theme throughout your store, people will enjoy what they see, stay long, and above all, BUY! Keywords — The use of descriptive keywords will make it easier for people to find you on the search engines, like Google.
Phrases like “black Star Wars Darth Vader t-shirt” are closer to what people will search for on Google — even more so than the word “t-shirt.” If you are a writer, this is a good time to spin a few stories; if not, Fiverr has plenty of people who will do it for you. Once you load the descriptions into your store application and the rest of the set-up is complete, you are ready to launch your store. This is where the fun begins! Integrations One thing I haven’t done yet is add integrations and take my store everywhere on the web. You can set up on Etsy, Amazon, and eBay. You can integrate with Google, Pinterest, and Facebook and run ads from each. There is so much you can do to generate sales; I’ve only scratched the surface. I plan to push my store further than before with all these connections, and so can you! Teespring explains how it’s done on their website. How to Make Your First Sale After you push the button to launch your store, you can’t let it sit idle and expect it to get sales. You’ve got to get the word out, and the best place to start is your personal network. Do you have some Facebook or Twitter friends? Can you send out an email to people you know and ask for their opinion on your t-shirts? Post first to your social media accounts and direct them to your store. Add a picture of one of your shirts to the post, because people respond better to visual content on social media. Pictures are memorable! Where Do My Customers Hang Out? The next step is to find out where your customers congregate. If you picked a niche that is near and dear to your heart, you are probably already connected in many ways to your target audience. Facebook groups, forums, and blogs are all places you can find people interested in your products. But a word of warning: don’t just jump onto someone else’s blog and start dropping your store URL in the comments. People do not appreciate spamming! When you post in Facebook groups or web forums, get to know everyone you can.
In your signature or at the bottom of the post, you can add a link to your store or social media accounts without breaking any rules, in most cases. As a last tip, try Reddit. There are many subreddits, and you are bound to find one with many potential customers. Don’t spam — they hate that! You can post a t-shirt picture or a product photo, and your link will be in your profile. It is a great place to get feedback on your designs! Don’t be blatant! But if you see an opportunity, take it! Giveaways or Deals Come up with a great giveaway or a special deal on your shirts and post it on social media. People love free and discounted stuff! There are some great resources on the internet with ideas for promotions to run. Be creative! It’s simple to set up a discount or giveaway in the Teespring software. Photo by Don Agnello on Unsplash Social Media Marketing — The Basics You will find that social media marketing generates most of your customers. Be careful about which platforms you choose, and do a thorough job on each. It’s better to have three platforms on which you do an amazing job interacting with your customers than to have many and do a poor job on all of them. For a t-shirt shop, Instagram, Facebook, or Pinterest will work best for you, followed by Twitter. Also, depending on your niche, one may work better than the others, or you may find a platform that works better than these. You will find out what works and what doesn’t. But some tactics always work no matter what. Image posts always perform better and get more interaction than text-only posts. You should be putting your professional mockups and photos out there for your audience to see. Encourage your customers to submit photos of themselves wearing your merchandise, and have your own branded #hashtag. Use hashtags to encourage different groups to interact with you. Use popular hashtags or make up your own! Find out who your competition is and follow them.
Find out who your competition follows and do the same. Find people and influencers in your niche and ask them for an opinion on your products. If they like what they see, mention them in future posts. Use Google Alerts to find out what people in your niche are talking about and when. Alerts can also give you ideas for future designs. Make sure the design on all your social media sites is consistent. A good design will go a long way towards building your brand and creating buzz around your store. You should also keep the tone of your posts consistent. You don’t want to be folksy one day, and edgy the next. Find out which times in a day get the most engagement. Weekdays usually are best, but finding the specific time when your customers are active is key. Remember, be consistent in tone and voice across your platforms. Each platform will have its own best practices. Instagram The only downside to Instagram is that you can’t put outbound clickable links in your posts, so you must be creative by finding ways for people to interact with you. You can use Linktree and a link in your profile, which is better than nothing. Remember that Instagram is a visual platform, so professional photos and short videos are your bread and butter. It’s best not to post product photos with a white background. Be creative and find ways to display your products that are fun and interesting and have a story. Encourage your audience to mention you or use your hashtag with their photos of your merchandise. If you do use filters, try to use the ones that emphasize warmth (red or yellow) because you will get much more engagement. Use the stories feature to tell stories about your products or make temporary special offers or promotions. Pinterest On Pinterest, 2 million people pin products every day. It is a huge platform for pinnable and visual content. This platform will drive a huge amount of traffic for you. 
Here are some best practices: You can add a pin button to each of your products, so it’s easy for your audience to share them. Like Instagram, product images with a white background don’t work as well as action shots of your merchandise. Images perform better when they are light, tall, and don’t include shots of people. Don’t only promote or pin your own products. Create boards with awesome or funny t-shirts you find on the platform. Facebook Facebook is the biggest player, but the amount of organic traffic and views your posts get is limited. Facebook will be powerful for you because you can boost posts and run targeted ads. Again, image posts work the best. Make sure you have a business page. Don’t use your personal account. Make sure you fill up your about section with keywords and valuable information so your page is found more easily. When you start getting some sales, start thinking about running paid ads. Because of all the information Facebook collects, it’s easier to get your ads in front of a precisely targeted group. Twitter Twitter is a great platform, even though the life of your tweets is very short. You must find out the times when your audience is on Twitter and time your tweets accordingly. Again, use images wherever possible. Hashtags are key on Twitter, so make sure you use them effectively. Follow your competitors and find out who they are following and follow them too. Many times, you will get a follow back! Photo by Brooke Lark on Unsplash Final Words We covered a lot of ground, and by no means am I implying I covered everything. I could write one 2000-word post on integrations alone! Do some research. Scour the Teespring website and Google and look for information on the business of running a t-shirt store. Pay close attention to Facebook ads, because they are so targeted that you can reach the exact people who are looking for you. Learn something new about your store and business every day. Try new things. Always be on the lookout for new designs and cool products.
Above all, have fun. Before long, you will make money, so don’t worry. When you do, share your success with others and show them how to start a t-shirt business.
https://jasonjamesweiland.medium.com/have-you-thought-about-making-money-selling-t-shirts-online-a3879d3a0c4e
['Jason Weiland']
2020-10-16 18:37:03.812000+00:00
['Tshirts', 'Entrepreneurship', 'Business', 'Ecommerce', 'Social Media']
The Clarity Trap: How to Keep Creating Despite Uncertainty
The Clarity Trap: How to Keep Creating Despite Uncertainty 3 Things to Remember I have a friend who writes beautifully. But what happens to him before embarking on a project happens to many of us artists. He falls in love with the idea yet feels the enormity of it. So he wrestles with questions: How do I start? How much time will this take? Then, after several days of inconsistent obsession and no answers, he convinces himself the project is not worth pursuing. At least, not until he receives full clarity and direction. And this is all before he even completes the first draft. Life can feel like an unending maze, and doubt and uncertainty can be dizzying. But how do we find direction? How do we know that what we’re doing is right, for sure (for sure)? Along my creative journey, I’ve found practices that have helped me endure the fog of uncertainty. Here are 3 tips to remember the next time you’re waiting for clarity.
https://medium.com/illumination/the-clarity-trap-how-to-keep-creating-despite-uncertainty-a0cc09b16709
['Brandon B. Keith']
2020-08-18 05:52:14.336000+00:00
['Advice', 'Writing', 'Self Improvement', 'Self', 'Art']
Post-Pandemic Education is the Reality For Millions of Girls Today
In the world right now, there are 750 million adults who lack basic literacy skills. Almost two-thirds of that number are women: half a billion people who missed out on education as girls, compared to 250 million men. I don’t know about you, but it’s clear that a boy’s education is more favoured in our world. A world that’s progressed so much but still has a lot further to go. You’d say that, traditionally, men have always been more favoured in society because of their stronger natural biological make-up. Look a bit closer, though, and it’s actually more complicated: pile some greed and insecurity on top of male biology and there you have the roots of gender inequality. Men are considered the breadwinners — hence the gender pay gap as we know it. Therefore, even from a young age, parents start prioritising sons over daughters because that will reap the most profit, especially in places where access to education is restricted. In many developing countries, like China, India, Mali, and Niger, it is rather tough for a girl to set foot in a school because wealth is just not in her favour. In the poorest of regions, parents look to invest wisely in their children to secure their old age. Thus, daughters are a liability to marry off and sons are an asset to keep. This unreasonable tradition has constrained many women, to the point where a significant number of women around the world believe their purpose is to be subservient to men. UNESCO statistics show that today there are 132 million girls out of education, 34.5 million of them out of primary education. By the time they reach secondary school, a further 32 million girls drop out, totaling 67 million young girls out of upper secondary education. These numbers actually convey a much darker story. A growing female youth is seen as a burden. Thus, an 8th grader is married off to a middle-aged man to bear his children whilst being a child herself.
This has occurred for centuries, but the future doesn’t look much better either, because the majority of the 67 million girls dropping out of secondary education are trapped in this cycle, only to recreate it in future generations. Even within their homes, young girls have no voice to convey the smallest of choices. Their silence only gives power to domineering husbands and fathers. It turns out males aren’t strong because of their biological make-up; it’s a woman’s tolerance that’s taken advantage of. Gaining knowledge and outside perspectives allows us to evaluate our own opinions. Educating our daughters will not only allow them to form opinions but also enhance our perspectives. It will give them the instinct to at least respond to unjust brutality. Reports suggest that every extra year of schooling enables girls to better care for their own health and that of any potential children. Infants born to a mother with just a few years of upper secondary education are less likely to die before the age of 5; education can reduce infant mortality by 50%. It will allow women to narrow the gender gap by beating the root cause: greed. It’ll enable them to earn an income and stake out a firm position of their own. And no longer will I be a liability.
https://medium.com/write-like-a-girl/post-pandemic-education-is-the-reality-for-millions-of-girls-today-84550ea0c3c3
[]
2020-07-28 16:15:31.519000+00:00
['Society', 'Culture', 'Women', 'Equality', 'Education']
To Be Continuous: From Monolith to Microservices
In the latest episode of To Be Continuous, Edith and Paul discuss the challenges and benefits of refactoring monolithic applications into microservices. They examine various approaches for creating microservice boundaries and dispel the myth that they should be defined as small as possible. This is episode #38 in the To Be Continuous podcast series, all about continuous delivery and software development. This episode of To Be Continuous is brought to you by Heavybit. To learn more about Heavybit, visit heavybit.com. While you’re there, check out their library, home to great educational talks from other developer company founders and industry leaders. TRANSCRIPT Paul Biggar: The idea of having something that you have a slow delivery cycle for, for some particular reason, or you have a normal delivery cycle but that still takes a day or whatever because there’s code review and that sort of thing. And you want to have something really, really fast. You want to have multiple stages, multiple speeds of delivery based on different needs of different parts of the company, and that sort of thing. Edith Harbaugh: I’m just so excited about this today, having started talking about it. I mean, this is really the promise of microservices. Like the old way was you had this monolith. The best thing I ever heard was a friend call it a disgusting monolith, where everything was all tied together, and if you wanted to change one thing, you had to test everything all together. Paul: Right, and we’ve all been there, I’d say. Edith: Yeah, and it’s very painful. It’s very painful because then you get into this fix/release cycle. But decoupling things into different components means that you can start iterating on some of them faster than others. Paul: Right. One of the questions that you get around microservices is, “How do you define microservice boundaries?” And a lot of people just go, “Oh, you make them as small as possible,” which I think is ridiculous.
But the ones that have always made sense to me are service boundaries where there are different teams, or where there’s a need to deploy a service at a different pace from the services around it. Edith: Yeah, I mean, I don’t think you should decompose for the point of decomposing. Because that’s when you end up with like 8,000 microservices, and you’re like, “I don’t know.” This happens. I think you should decompose when, as you said, there’s a functional reason why something has to move at a different speed. Paul: Right, so the obvious one apart from those two is that it makes logical sense as a dependency for the API. This is a good API boundary for all the things that rely on it. Edith: Yeah, because that releases you basically from release hell. Because you want to have a logical way that you can say, “Okay, this microservice interacts with this one at a certain time across a certain boundary.” A really good example I heard is: there’s this myth that you iterate very frequently on your UI, and I think this is true if you’re in a more consumer business where you can get a million people looking at your app and rapidly iterate. But if you’re a B2B company and you’re rolling out to your own customers, they don’t actually want two people at the same customer to have different user experiences. It’s very confusing, so you get a lot of support calls. Paul: So I agree with you in the general case. But having used AB testing, it is an extremely useful tool to roll features out slowly, and I think that even in the B2B case. Edith: I think it really depends on your B2B. Because I have heard some horror stories about people who try to do it at a B2B, and they get a lot of support calls. You know, “What’s happening? The button I expected to be there is not there.” Or “we just did a whole lot of training on this last week.” Paul: Yeah, and something has changed? Edith: Yeah.
Paul: So I think one of the things that’s really valuable about the modern dev tools go-to-market is that you have both on-prem and cloud customers. Edith: Are we just doing buzzword bingo now? Paul: Sure, sure, sure. So when you have customers who use both your cloud service and your downloadable software, the customers who use your downloadable software are the people who are naturally much more conservative. That’s why they’re using your downloadable software in the first place. So you can do your AB testing in the cloud on the cloud customers, who are much more early adopters and much more willing to put up with that sort of thing and happier to get the good UX that comes out of it. Edith: Even then, most people don’t have sufficient volume to do effective AB testing. Paul: I don’t think that’s true at all. Edith: I think that’s very true. Paul: You don’t have sufficient volume to do minor AB testing, like to tell the difference between two different versions of blue, right? That’s something you need Google scale for. But if you’re trying out a new positioning, you can use AB testing. I think the major value that people overlook when they complain about AB testing is that it tells you that you didn’t fuck something up. Edith: Yeah, I think AB testing is a misnomer. I think it’s really risk tolerance. Paul: Right, so if you’re launching new messaging or a new onboarding page or something like that that you think is going to sell your positioning much better, or it mentions a new product which hadn’t been mentioned before, you want a B version that was just the old version, which tells you you didn’t tank your conversions. Edith: Yeah, and the dirty secret is that most changes have no effect at all. Paul: Right, right. But you want to be sure of that. We launched a beautiful new homepage once, and it dropped our conversions 20%. Edith: Yeah, I remember. Paul always teases me ’cause I talk about TripIt.
Like we tried a different footer, and it destroyed our conversion. Paul: Oh! A footer. Edith: A footer. Paul: Did it have a different call to action, or move the call to action, or provide more calls to action? Edith: The footer had a lot of calls to action. Paul: Yeah, that was exactly the thing with ours. We had, “You can sign up right now, you can read the docs, or you can do something else.” And it ended up that a lot of people would go read the docs, and then just forget about us. Edith: Yeah, and this is the classic product debate then. Do you want them to sign up if all they’re going to do is read the docs? And you can argue about this for hours, about qualifying leads and when to get people into the funnel. Paul: I think if you have a drip marketing campaign, you generally want them to cross the line which gets them into the drip marketing campaign. Edith: Paul, you’re a marketer! Paul: I was the CEO of a company. I’ve done fucking everything, for three months each. Edith: What was the rotation, if you did fucking everything for three months each? Paul: Yeah, everything that got done by me was done badly, but it was better than not being done at all. Edith: So what was your rotation? Paul: I did sales. I did marketing. I did PR. I did UX. I did support. I did everything. Edith: What was it like when you were sales? Paul: So fortunately Circle had so much interest and such strong product-market fit that sales was literally having a call with the CFO saying the price is this. And he’s like, “Can you lower the price?” “Uh, no.” “All right, well, thanks.” I had a couple of calls that were literally that. They brought some financy person or the manager and they were like, “We demand a discount.” And it’s like, “Yeah, this is a good price, we feel.” Edith: So did they end up buying? Paul: Yeah, yeah. I mean, some of them don’t. I hated giving discounts. I felt like we had a really good price, and it was actually kind of cheap for the value that customers got out of it.
And so when people came for discounts, I was just like, “No.” Edith: That’s funny. You know there’s the opposite, which is you should price a little high expecting that you’ll discount. Paul: Yeah, I think for bottom-up sales, you can’t really do that. Because people won’t ask for it, they’ll just leave. Edith: Ah! Paul: The old Oracle way doesn’t apply to our modern bottom-up. I guess, do you have a top-down? Edith: We have both. I mean, as soon as you get into a procurement department, their job is to get a discount. Paul: Right, right. Edith: So if you don’t give them a discount, they feel like they didn’t do their job, and they’re like, “What happened here?” Paul: Yeah, and that’s why you end up with, I guess, the microservice version of the sales process: you apply a different sales process to different people. The bottom-up people get the price on the website, and the people who go through their procurement department get another thing. And the fact that you have to talk to their procurement department means that they’re getting charged an outrageous price in the first place to deal with that. Edith: I think the words continuous delivery terrify a lot of people. Paul: Mmhmm. Edith: It’s funny because we do a podcast called… Paul: To Be Continuous, yeah, I think that’s what it’s called. Edith: I mean, let me check the label. Yeah, I think people think continuous delivery means that you have to push out multiple times a day or an hour, when really it just means that you need to be able to push out what you want. Paul: Right, right. I’ve rarely seen a company that does continuous delivery and uses that to mean we’re only doing it when we can. I think they all push out every version. I find it extremely rare for people not to, especially in small companies. Maybe there’s a small-company bias. Edith: No, I’m talking more about larger orgs. They find continuous delivery terrifying.
Because they’re like, “Our processes are not set up to have a new version of the UI every day. We need to go train our support people. We need to go do this. We need to update this.” So to go back to what you said, I think there are different functions within their systems that they would like to update at different speeds. Paul: Yep, I agree with that. You can kind of think of the UI as one microservice. You can think of all the other things as different microservices. But the one that I’ve always found people need to update quickly, though they don’t necessarily want to update it repeatedly, is the marketing pages. So the entire front-end of your thing. You’ll get to a point when you have marketers, and the marketers ask for the front-end, which was beautifully built by developers. Edith: With Handlebars and Jekyll. Paul: I think actually Jekyll is pretty good for them, but it will typically be built in React, you know, whatever, the same as your website, so it’ll be nicely unified and have an excellent release process. And the marketers will be like, “Yeah, we don’t want to have to talk to you to make a change here. We need you to put in the Optimizely pixel, and could you switch this over to WordPress?” Edith: Yeah, ’cause they want to be able to do it themselves; they don’t want to have to go bug their developers. Paul: Yep. Then they end up with a terrible release process from a developer’s perspective, in that there’s no staging environment. There’s no ability to preview your stuff. There’s no code review. There are no nice automated releases that have pull requests in them. There’s just, like, someone playing around in WordPress. Edith: I could feel the horror in your voice. Paul: Well, I think it’s easy to fuck things up that way. But it’s better to only have to rely on your own team to get something out. When you have to start to rely on other teams, that’s when your delivery really gets slow.
Edith: Yeah, I thought of a classic thing where you want to have very quick releases, and it’s anything security-related. This is actually one of my big things: you want to be able to constantly patch security vulnerabilities. And the security vulnerabilities that exist today will be different tomorrow, the day after, and the day after that. Paul: You generally want to have a way of dealing with security vulnerabilities immediately, even though that is extremely scary and very, very dangerous. Edith: But whatever you know today will be different later. Paul: Yep, we had multiple approaches for dealing with different container images. Because a container image would take like 36 hours to roll and to deploy. So we needed to be able to run something on that image. It was partially for security. It was partially for user experience. It was partially because sometimes things broke in the ecosystem. But we had the ability to go change what we called the sudo hack, a command that would run as sudo. And there were 30, 40 times where we needed to make some change to the sudo hack to support something weird that was going on. Edith: Yeah, I think a lesson is, if you have to do a fix, how long does it take you to do it? Paul: Right, there’s the actual fix itself, and then there’s the process of getting the fix into production. Edith: Yeah, and if that process is more than a day, you’re in serious trouble. Paul: Yep. Well, I mean, if it takes you a day but you’re able to get it patched really quick, I mean, that’s kind of the source of the word patch, right? If you patch it immediately and then you go fix it, and that takes a day, that’s no big deal. Edith: Oh, yeah, but as long as you can patch. Paul: Exactly, yeah.
So we ended up at various points with a very quick process to update our nginx, so that we could filter particular data or filter particular user types and that kind of thing, and a bunch of different levels to deal with particular security vulnerabilities that might theoretically come up. Not just security vulnerabilities, but also things like DDoS. Edith: Yeah, just everything that comes up. Paul: Incidents of whatever kind. Edith: Yeah, incidents that are not your core functionality, but are crucial to address quickly. Paul: Right, exactly. Edith: So I think you could break down the different speeds at which you want stuff to update. Stuff that leaves you vulnerable to the outside world, that’s kind of like your protective cocoon: if there’s a hole breached there, you need to repair it immediately, or else you’ll end up hacked. Paul: The problem with those is that you also need to maintain them. So you need to have testing around those. You need to have people remember how to do them. If you go to use one of those things, and no one’s touched it in six months and it went stale or it rotted in some way, then you’re even more fucked now, especially if you’ve provided yourself an accidental back door, or provided your attackers an accidental back door. Edith: Yeah, and then there’s stuff that you want to go at different speeds. Like if you’re doing any sort of billing update, you actually want to be kind of cautious. Paul: Yeah, I mean, this is where I find feature flags particularly useful. You enable a billing change like that for one person and then you watch what happens. You make sure that they’re able to do the thing, and you go very, very cautiously. Edith: Thanks for wearing a LaunchDarkly shirt, Paul. Paul: No worries! Feature flags are awesome. Edith: It looks really good on you, by the way. It matches your haircut. Paul: Thanks.
Edith: I get really excited the more I talk to our LaunchDarkly customers right now, because I’ve seen the way the world has changed even in the couple of years that we’ve been around. Paul: What are people using LaunchDarkly for that blew your mind? Or how, perhaps, are they using it that blew your mind? Edith: I’m pretty jaded, so not much blows my mind. But it’s more like we visited a big customer and they said how much less stress they were under and how much less risky their releases were now. Paul: Right, we’ve been saying for years that continuous delivery is less risky than monolithic delivery or staged delivery. What did we used to call it when it wasn’t continuous? Edith: Waterfall? Paul: Sure, waterfall. I think people didn’t really believe us. Edith: I think there’s still a lot of doubt. I think if you draw like a crossing-the-chasm type chart, we’re still squarely in the early adopters. Paul: Yep, that’s true. Edith: But I think it’s starting to move. Paul: Yep.
https://medium.com/launchdarkly/to-be-continuous-from-monolith-to-microservices-1eadbdf1b9a0
[]
2018-01-05 01:49:36.259000+00:00
['Microservices', 'Continuous Delivery', 'To Be Continuous', 'Software Development']
Technology and Sex: On Everybody’s Mind These Days
Zero and one is female and male, literally, and figuratively. TechSex Biology and technology share a circular relationship, both dependent on zero and one, one real, the other virtual (both real, both virtual). Genetics and neuroscience are dependent on technology and technology is dependent on a circle (zero and one is circumference and diameter, literally and figuratively). Technologists have to use both in order to algorithmically ‘code’ either (artificial intelligence shares a circle with human intelligence). Literature, with, or without, sex (well, there is no such thing as literature without sex) tells the same story. You need genetics and neuroscience for both. Zero to one is one to zero (zero and one is one and zero). Sex and death share an uber-basic (and necessary) circle. Again, X and Y is zero and one, virtual and real. So, to get to the bottom of everything (and, also, the top) it’s all about the zero and the one. Where did they come from? What are they trying to do, to us, (with us, for us) really? Now we’re into physics (and philosophy). Absolute zero is perpetual motion. Easily proven these days, the conservation of the circle is the core dynamic in nature (zero and one as circumference and diameter) (basis for technology) (and biology). Also, time and space. The circle is conserving itself so technology and biology (time and space) can, also, conserve themselves. (Reproduction in any discipline is the conservation of a circle.) So where are we with this, then? What does it mean? There is no such thing as zero. Zero and one (zero and infinity) are joined, and separated, by pi. This removes the problem called infinity.
https://medium.com/the-circular-theory/how-technology-is-related-to-sex-56f141c15445
['Ilexa Yardley']
2017-06-29 22:45:30.366000+00:00
['Universal Relativity', 'Artificial Intelligence', 'Virtual Reality', 'Data Science', 'Circular Reality']
Go-Funk: Utility functions in your Go code
Go-Funk: Utility functions in your Go code Make your life easier with this library of helper functions. Introduction If you’re coming from languages like JavaScript or Python, you may be in for some surprises when programming in Go. You’ll quickly notice that functions such as filter, map, or reduce are not part of this ecosystem. The fact that you don’t have a built-in function to check if an element is part of a slice is even more shocking. Where you’d simply do [1, 2, 3, 4, 5].includes(5) in JavaScript, or 5 in [1, 2, 3, 4, 5] in Python, you’ll discover that it’s a whole other story in Go. The first reflex, of course, is to Google the classic “golang check if element in slice”, hoping to find a built-in way of doing such operations. The first links on the results page will quickly put an end to that expectation. You’ll learn that Go doesn’t have a built-in way to handle those functions. Fortunately, there is a small library called Go-Funk. Go-Funk contains various helper functions such as Filter, Contains, IndexOf, etc. I will cover a few of them in this article and show you how to avoid a few traps. The issue with that kind of helper is that it’s extremely difficult to make a single generic function that handles all the types while keeping type safety. Let’s take the Contains function of Go-Funk, for example. This function works with all the possible types. Unfortunately, that makes it impossible for the compiler to check the types of the values passed as parameters: the function’s signature simply accepts untyped interface{} arguments. To try to fix this issue, Go-Funk implements specific helper functions for the standard Go types. For example, the function ContainsInt exists specifically to handle the case where we’d want to check if an int is in a slice of int. We’ll see later that it can be trickier with custom types. Examples Map The Map function iterates through the elements of a slice (or a map) and modifies them. It returns a new slice.
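The runnable gists embedded in the original post did not survive extraction. As a stand-in, here is a self-contained sketch that mimics go-funk’s reflection-based approach; mapSlice is a hypothetical name written with the standard reflect package, not go-funk’s actual implementation, but funk.Map is called the same way:

```go
package main

import (
	"fmt"
	"reflect"
)

// mapSlice mimics the shape of funk.Map: it accepts any slice and any
// unary function as untyped interface{} values, applies the function
// element-wise via reflection, and returns the new slice as an untyped
// interface{} value.
func mapSlice(arr interface{}, fn interface{}) interface{} {
	arrV := reflect.ValueOf(arr)
	fnV := reflect.ValueOf(fn)
	// Build a result slice whose element type matches the function's
	// return type.
	out := reflect.MakeSlice(reflect.SliceOf(fnV.Type().Out(0)), arrV.Len(), arrV.Len())
	for i := 0; i < arrV.Len(); i++ {
		out.Index(i).Set(fnV.Call([]reflect.Value{arrV.Index(i)})[0])
	}
	return out.Interface()
}

func main() {
	slice := []int{1, 2, 3, 4, 5}
	// Note: newSlice's static type is interface{}, not []int.
	newSlice := mapSlice(slice, func(x int) int { return x + 1 })
	fmt.Println(slice)
	fmt.Println(newSlice)
}
```

Because the helper can only return interface{}, the compiler has no idea the result is really a []int, which is the trap discussed next.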
Here is the result if you run this code: [1 2 3 4 5] [2 3 4 5 6] The result is what we could expect from this code. But if you take a closer look, you’ll see an issue with the type of the newSlice variable: it is interface{}. To fix this problem, we’ll make use of type assertions. To do this, you just have to add .([]int) right after funk.Map(...). This way, we’re telling the Go compiler: “You can’t determine the type of this value, but I assure you it’s a slice of int, so could you please convert it?” The only issue with type assertions is that errors aren’t caught at compile time but only at runtime, so you have to be extra careful when using this method. Filter Filter takes a slice and a callback function as arguments. The callback function returns a Boolean. Filter passes the slice’s elements one by one through the callback function: if it returns true, the element will be included in the new slice. In this situation, we iterate through the numbers and say: “take all of them except if the number is equal to 2”. Here is the result: [1 2 3 4 5] [1 3 4 5] Note: In this situation again, we had to make use of a type assertion. This will be the case in most of the examples. Reduce The Reduce function takes the following arguments: a slice, a callback function, and an initial value for the accumulator. In Go-Funk, this function returns a float. Here is how you’d use it: As you can see, the function iterates through the slice element by element. The callback function adds the current element to the accumulator. Result: [1 2 3 4 5] 15 With the Reduce function, the type assertion is not necessary. If you have a look at the function signature, you’ll notice that it will always return a float. Conclusion This was only an overview of the functions you can find in the Go-Funk package.
Even though those are quite practical, I’d recommend using them only for prototyping: the fact that they can’t handle typing properly (especially with custom types) can make your code unsafe in a production environment. If you want the same functionality in your production code, the only solution is to write your own functions for your custom types. You can still use the Go-Funk type-specific functions for the standard Go types. Even if it’s not the most convenient way at first, it definitely pays off in the end, since the compiler will be able to catch type-related bugs at compile time. As always, thank you for reading this article. I hope you enjoyed it!
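As a postscript to the recommendation above about writing your own functions for custom types, here is a minimal sketch of what such a hand-rolled, type-safe helper might look like. User and containsUser are hypothetical names invented for this illustration:

```go
package main

import "fmt"

// User is a hypothetical custom type standing in for whatever struct
// your application actually defines.
type User struct {
	ID   int
	Name string
}

// containsUser is a hand-written, type-safe alternative to the generic
// funk.Contains for the User type. Unlike the interface{}-based helper,
// the compiler now rejects calls with mismatched argument types.
func containsUser(users []User, target User) bool {
	for _, u := range users {
		if u == target {
			return true
		}
	}
	return false
}

func main() {
	users := []User{{1, "Ada"}, {2, "Grace"}}
	fmt.Println(containsUser(users, User{2, "Grace"})) // true
	fmt.Println(containsUser(users, User{3, "Alan"}))  // false
}
```

The boilerplate cost of one small function per type buys back the compile-time checking that the reflection-based helpers give up.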
https://medium.com/swlh/go-funk-utility-functions-in-your-go-code-6b3654b10552
['Jérôme Mottet']
2019-06-24 20:53:35.072000+00:00
['Programming', 'Software Development', 'Go', 'Development', 'Golang']
Trump’s CDC is Banning Evictions, and it Probably Will Help Him.
Trump’s CDC is Banning Evictions, and it Probably Will Help Him. His latest faux populist gimmick could work. Carolyn Kaster/AP As I’ve said now countless times before, I’ve struggled with processing just how quickly everything has spiraled out of control over the course of a matter of months. It feels as though with each passing day, the American people have dealt with a new layer of unnecessary trauma and suffering, and the sense of utter despair and hopelessness only intensifies the more I attempt to come to terms with the extent of it. But considering we are fresh into a new month, the fact that tens of millions of people have been unable to pay their mortgage and rent has been heavy on my mind, and has prompted a renewed sense of dread about what lies ahead in the coming weeks and months. Recently, it was reported that the Trump administration will be using the CDC as a means by which to stop the coming wave of evictions. Chris Arnold with NPR writes: “The Trump administration is ordering a halt on evictions nationwide through December for people who have lost work during the pandemic and don’t have other good housing options. The new eviction ban is being enacted through the Centers for Disease Control and Prevention. The goal is to stem the spread of the COVID-19 outbreak, which the agency says in its order “presents a historic threat to public health.” It’s by far the most sweeping move yet by the administration to try to head off a looming wave of evictions of people who have lost their jobs or taken a major blow to their income because of the pandemic. Housing advocates and landlord groups both have been warning that millions of people could soon be put out of their homes through eviction if Congress does not do more to help renters and landlords and reinstate expanded unemployment benefits.
…Under the rules of the order, renters have to sign a declaration saying they don’t make more than $99,000 a year — or twice that if filing a joint tax return — and that they have no other option if evicted other than homelessness or living with more people in close proximity.” Of course, while this is an undeniably essential move in this moment in order to prevent a mass humanitarian crisis from coming to fruition, as per usual the American government has put a band-aid on a gushing wound it will no longer be able to hide come January, when months’ worth of back rent is expected to be paid. It should also go without saying that the President hasn’t done this out of the goodness of his heart, and that this has nothing to do with the wellbeing of the American people, but instead everything to do with the preservation of his own power as the impending election grows closer and closer. The thing is, I’m not sure anyone who will immediately benefit from this action is going to care about that at all. In all honesty, given the circumstances they’re in, I really cannot blame them. If nothing else, this is yet another indication of the sadistic, criminal, and frankly abusive way the American people have been treated by the very people — in all levels of government — who we are supposed to trust will listen to our concerns and advocate for the best interests of their constituents. In perhaps the most mild way one could look at it, Americans have been abandoned by our lawmakers. People have been starved into submission, and robbed of any last shred of stability they have through no fault of their own, while politicians go on “recess” on the taxpayer’s dime to solicit donations from their high-dollar donors. While owners of small businesses across the country are wondering how they’re going to get through this, the Treasury secretary is handing out hundreds of billions of dollars worth of taxpayer money to some of the largest corporations in the country.
While Jeff Bezos, the founder and CEO of Amazon — a company which doesn’t pay any federal taxes — has nearly doubled his wealth to $200 billion as a direct result of this pandemic, the American people who created his wealth have gotten one-time checks of $1,200 to get us through. Given the amount of trauma that’s been inflicted on us, is it any wonder that the American people are so desperate for some sort of help that, when they get it from him, President Trump is rightly confident that the absolute bare minimum could be enough to help him retain his power? As I’ve said before, people need something to vote for, not just something to vote against, and for at least some of us, this recent ban on evictions could be that “something”. There’s not a doubt in my mind that in this twisted dystopian hellscape we find ourselves in, Trump will be able to gaslight a number of people into forgetting the fact that he helped create the situation they find themselves in, so long as he reminds them on a consistent basis that he was the one who took the action that allowed them to keep the roof over their child’s head while — as I’m sure he’ll say — Congress did nothing. I’m not sure anything could serve as a better testament to just how far this country has sunk than the idea that the shelter of tens of millions of people has been boiled down to nothing more than a partisan political tool for the electoral aspirations of a six-time-bankrupt billionaire who happens to hold the highest office in the land. It seems as though every day there’s more and more reason to grow concerned that he may indeed be re-elected, and I’m still trying to prepare myself for that very real possibility.
https://medium.com/discourse/trumps-cdc-is-banning-evictions-and-it-probably-will-help-him-8bf572fed10b
['Lauren Elizabeth']
2020-09-04 20:12:46.234000+00:00
['Society', 'Election 2020', 'Politics', 'Trump', 'Government']
Kill Your Darlings
Nearly 15 years after its founding, however, Spotify is shaping up to be the very tyrant that it set out to slay; founded as a reaction to rampant music piracy, Spotify is becoming another kind of threat to the stability and success of a diverse and thriving music industry. The platform’s recent struggles attracting marquee artists and rising competition are a reminder that founders who lack conviction in their foundational beliefs create a dissonance between story and business practices that challenges sustainability. At worst, these wandering founders threaten the integrity of the very audiences they set out to support. When Spotify was in its early days in 2006, music piracy was surging as sites like The Pirate Bay and LimeWire provided easy access to thousands of tracks and albums — completely free of charge. Recognizing how this was cannibalizing the music industry, Spotify founders Daniel Ek and Martin Lorentzon officially launched Spotify in 2008. “I realized that you can never legislate away from piracy,” Ek said in an interview with The Telegraph in 2010. “The only way to solve the problem was to create a service that was better than piracy and at the same time compensates the music industry.” Through the Spotify model, the brand licenses music from record labels. Listeners are then able to access these songs and albums, pressing play on any of them for free, with some caveats: ads will play between every other song or so, the audio quality is low, and users are only able to listen to tracks online, not download them to play later. The company also offers a tiered, paid membership program starting at $10 per month that allows users to listen to higher quality, downloadable audio ad-free. From this combined ad and membership revenue, Spotify pays a portion of its earnings to the artist, building what appears to be a new, sustainable music ecosystem: Listeners flock to stream their favorite artist’s tracks and, for every listen, that artist collects a check. 
This business model so perfectly supports Spotify’s aforementioned mission statement that both audiences the brand seeks to serve (listeners and artists) can feel good about their economic relationship — particularly important for a product so intimately connected to emotion. Spotify offers a path for artists to make a living, and for their fans to connect with and find joy in their favorite artist’s creative pursuits. Co-founder and CEO Ek expanded on this mission within the pages of Spotify’s IPO filing, pledging to create a “cultural platform where professional creators can break free of their medium’s constraints” and “where everyone can enjoy an immersive artistic experience that enables us to empathize with each other and to feel part of a greater whole.” For Ek, Spotify is about creating communities around art, and reshaping the music industry to support those who find value (both emotional and financial) in music — whether making it or consuming it. And, based on Spotify’s runaway success, it would appear that the streaming service has indeed created the harmonious musical ecosystem that Ek and Lorentzon had envisioned in the heyday of piracy. Valued at $46 billion, Spotify has nearly 300 million monthly active users — of which nearly half are paying for the company’s ad-free subscription — and, as of 2018, 3 million artists on its platform. In addition to expanding to host live events and performances, the brand has also inked exclusive deals with Michelle and Barack Obama, DC Comics, and podcast powerhouse personality Joe Rogan. As the streaming service has grown in popularity, listeners have praised Spotify’s extensive library, affordability, and accessibility through a variety of different devices. If Spotify’s exclusive focus was on serving listeners, this might be enough.
But the brand has designated artists to be as vital as those consuming their music within its story, which means both audiences must remain enthused by the platform for its long-term success. And from the artists’ point of view, things haven’t been as rosy. When, in early 2019, Spotify made it possible for listeners to share an infographic of their most listened to artists and tracks of 2018 (called “Wrapped”), artists also had the chance to share their yearly insights. For Zoë Keating, a cellist out of Vermont, this gave her the opportunity to post not only how many people streamed her music and which countries they were from; Keating also shared with her fans the profit she made from 2 million streams, or 190,000 hours of music: $12,231. At about half a penny per stream, it’s difficult to see how even a successful artist like Keating — and any others who aren’t Drake or Billie Eilish — could expect to “live off their art,” as Spotify puts it. Keating is hardly the first or most prominent artist to criticize Spotify’s failure to deliver on its promise to creators. Thom Yorke of Radiohead and Taylor Swift have both withheld or withdrawn their music from the streaming service over claims of unfair compensation to artists. Jay-Z went as far as to create an alternate, artist-owned streaming platform, Tidal, where most of his and his wife Beyoncé’s music can be found today. This paltry compensation for many artists is especially baffling in the context of the thriving music industry: In 2018, overall music revenue increased $2.2 billion from 2017 to a total of $18.8 billion. There’s reason to think that these critiques from artists and their decision to flock to competing services is weighing on Spotify in at least some way. While the platform’s user base continues to grow, that growth is increasingly coming from new markets; the number of North American subscribers in the third quarter of 2020 was almost identical to a year before.
The same emotional connection to artists and their music that draws people onto Spotify means critiques from those artists carry significant weight with users. Superficially, the Spotify ecosystem appears to be aligned with its purpose. But artist discontent is rooted in a crucial detail: a payment structure that completely divorces the financial support of fans from the artists they love. Known as a “pro rata” model, Spotify’s artists are compensated not by their fans directly, but rather based on their overall share of listeners’ ears. This means that all revenue is funneled into one big pot, and every artist’s streams are compared to one another; if one artist is getting 5 percent of those overall streams, that artist receives 5 percent of the pot. It’s a system that doesn’t even handsomely reward top-tier artists, and makes it near-impossible for artists like Keating to make a living on their streaming earnings. In a deep dive from Rolling Stone, columnist Tim Ingham did the math on just how much Spotify’s highest performing artists stand to make. According to Spotify, 43,000 artists net 90 percent of streams; those within that top tier stand to make only about $90,000 a year on streaming. On top of this, most of this money is likely going to a smaller number of music superstars, and varying structures across different labels make the financial reality much more opaque. For the “bottom tier” of artists, however — more than 98 percent of the total number of artists on Spotify — the returns calculate to about $12 per month, barely enough for the artist to afford a Spotify subscription of their own. This race to increase an artist’s share of streams is having second-order effects in the music industry that work to undermine both listeners and creators.
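The arithmetic behind the pro rata model described above can be sketched in a few lines of JavaScript. The numbers below are illustrative, not Spotify's actual figures:

```javascript
// Pro rata payout: each artist's share of the total revenue pot is
// proportional to their share of total streams, regardless of which
// listeners (and whose subscription fees) generated those streams.
function proRataPayouts(streamsByArtist, revenuePot) {
  const totalStreams = Object.values(streamsByArtist).reduce((a, b) => a + b, 0);
  const payouts = {};
  for (const [artist, streams] of Object.entries(streamsByArtist)) {
    payouts[artist] = revenuePot * (streams / totalStreams);
  }
  return payouts;
}

// A superstar with 75% of streams takes 75% of the pot, even if a niche
// artist's fans paid the same subscription fee to hear only that artist.
const payouts = proRataPayouts({ superstar: 7500000, niche: 2500000 }, 100000);
console.log(payouts); // → { superstar: 75000, niche: 25000 }
```

This is what makes the model feel divorced from fandom: a subscriber who streams only the niche artist still sees most of their fee flow to whoever dominates overall streams.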
Many artists feel compelled to release music at a faster rate than before — a move encouraged by Ek, who recently gave an interview criticizing those musicians only releasing music “once every three to four years” and insisted it’s about “putting the work in.” The songs artists do release are becoming shorter and shorter. Musicians are encouraged to front-load catchy choruses right off the bat to win streamers, undermining their creative output. “I really do feel like the flattening and watering down of the experience of musical community and fandom is one of the biggest issues [of streaming],” Liz Pelly, a journalist who covers streaming, told NPR. “We’re in this moment where artists on every level are expected to think this way that, in the past, would have been a way of thinking about artists that are gonna be on the Top 40 radio. And now all artists are expected to be beholden to the mechanisms of pop music in a sense.” When talking with Fast Company about why she chose not to release her 2015 album Vulnicura on Spotify, Icelandic artist Björk said: “This streaming thing just does not feel right. … To work on something for two or three years and then just, Oh, here it is for free. It’s not about the money; it’s about respect, you know? Respect for the craft and the amount of work you put into it.” Björk is a prominent enough artist to earn real money on Spotify. If she does not feel the platform respects her creative output, and other superstars feel similarly, it’s worth wondering how long before users reach the same conclusion. By undermining a fan’s opportunity to support their favorite artists — making it impossible for those fans to directly contribute to those artists financially and, therefore, making it harder for them to continue making music — Spotify erodes the emotional connection inherent to its story, and challenges the brand’s ability to be the dominant player in streaming music long-term.
Many companies start off with a purpose that goes beyond simply selling a product or providing a service. As is the case with Spotify, this purpose is animated and carried forth by the company’s founder or founders, who can allow this core belief to guide their decision making and, ultimately, the company’s growth. When this belief is codified by a strategic brand narrative, it makes it easy for an organization to evolve in a way that aligns with its founding purpose. Without a strategy to put that purpose into action, a company risks proving the classic song right: you always hurt the one you love. Hannah is an associate at Woden. Want to stay connected? Add Hannah on LinkedIn, read our extensive guide on how to craft your organization’s narrative, or send us an email at connect@wodenworks.com to discuss whatever your storytelling needs may be.
https://medium.com/swlh/kill-your-darlings-f7238b2119f8
[]
2020-11-15 13:18:19.797000+00:00
['Leadership', 'Streaming', 'Brand Strategy', 'Growth', 'Music']
Innovation in a time of uncertainty
In the age of Coronavirus, businesses and decision makers will need to think differently about how they strategise and steer their organisations through a more uncertain landscape. Markets for transport and the built environment traditionally have exceptionally long lead times and planning cycles. Factors such as population growth, employment, economic forecasts and any number of other established metrics can all feed into decisions around the requirements for new infrastructure and services. For years these metrics have been relatively stable and, to some degree, predictable. Not so today. Our nation, government and organisations will need to examine closely things that until recently would have been considered gospel. We have not historically needed to question official population projections from the Office for National Statistics or congestion forecasts from the Department for Transport when developing plans. However, in the short to medium term at least, we could be looking at a situation in which we have to completely rethink our modus operandi as a country and how we support innovation for UK Plc.

What might the advent of COVID-19 mean for the innovation economy?

The widely forecast economic slowdown is likely to reduce the appetite for risk across a number of sectors. Innovations, particularly early stage applications of research, inevitably come with a greater element of risk than established technologies. Non-traditional approaches to procurement (explored in greater detail in our Challenging Procurement initiative) can help manage risk for buyers and should be encouraged to avoid a general retreat from investment in new, innovative solutions. Just as there is a tendency for firms to gravitate towards established solutions and players in tough times, we may also see companies slashing R&D budgets and retrenching back to core offerings in order to weather the storm. This would be bad news for innovators.
However, as organisations face difficult times and tighter than ever operating margins, they also face a greater than usual need to do things differently. Innovators who can evidence that their solution delivers greater efficiency and productivity from existing assets stand to benefit.

What should innovators do to navigate the current complexity?

Plan for different outcomes. This is not the time for linear cause and effect planning. Businesses and place leaders alike need to consider a far wider range of impacts, outcomes, problems and opportunities in order to be ready for them as and when they occur.

Reconsider your offerings. The environment and challenges facing organisations are changing and fluid. Businesses seeking to seize the opportunities that arise need to listen carefully to understand new market needs and adapt their offerings accordingly. Pulling the levers that worked pre-Coronavirus will not necessarily solve the problems places now face. In some cases, this might mean looking at solutions from completely the opposite direction: for instance, services designed to track and maximise the smooth flow of people through a space may now be repurposed to reduce crowding and help maintain social distancing.

Flexibility in everything. Things are likely to continue changing rapidly and will be difficult to predict. Businesses and place leaders alike will need to operate in collaboration with others, harness data and indicators to track change, and be ready to adapt rapidly as the situation evolves. In a volatile, uncertain, complex and ambiguous world, planning will only get us so far; leaders will also need to be prepared to respond as things develop.

Danger/Opportunity. Even after four months of lockdown, uncertainty about the future remains. The coming months will present further challenges, but the strategies outlined above will help maintain the ‘high flex’ approach necessary to tackle the challenges and leverage the opportunities as they arrive.
https://medium.com/connectedplaces/innovation-in-an-age-of-uncertainty-66fa80273ada
['Alfred Jackson']
2020-07-06 07:46:01.449000+00:00
['Covid 19', 'Leadership', 'Business', 'Strategy', 'Coronavirus']
COYO Hackathon: Audio Messages (for the Android platform)
1. General Idea

My employer COYO offers a mobile app called ‘Engage’ that lets employees easily communicate with each other, including a messaging functionality like you might know from WhatsApp or other messengers. One commonly requested feature is audio messages. This is extremely useful, especially when you quickly want to share something without typing endless texts. As part of a three-day hackathon I decided to add this nice little feature to our app for the Android platform, without relying too much on any libraries and instead using plain Android functionality. The app is entirely written in Kotlin. The basic messaging functionality is already implemented; we will focus on adding audio messages here. The feature basically has the following requirements:

- The user is able to record his own voice and send it as a message
- The user can listen to his own and received audio messages
- The user sees a progress bar for the duration
- The user can jump to a specific point of the audio message

Given situation: The chat part of the app is based on a general RecyclerView component. Each message type is presented by a different view holder in the respective adapter. There already exist types like pure text messages, pictures and messages with attachments. Now we want to add a new message type for audio messages. To get a general idea of how it looks and where we are going to end up, have a look at the following screenshot: We have a microphone in the input bar of the chat to start the recording, and a message with a play button to play the actual recording.

2. Implementation

2.1 Playing audio records

In the onCreateViewHolder method of the adapter used for the recycler view, we are going to consider a new ViewHolder type for our audio messages. The general layout we use for our message is based on a linear layout. We are going to add some views to our layout.
Our pressPlay view will be initialized as a button view later on, to start and stop the playing audio message. Moreover we add a seekbar view that shows the progress, i.e. the current position, of the audio being played. For the actual audio playing we introduce a plain MediaPlayer. There might exist better players with more functionality, e.g. ExoPlayer (https://github.com/google/ExoPlayer), but the plain Android MediaPlayer provides everything we need right now. In our case we might want to play audio from a received message or from a local recording. In order to consider both cases, we first look up the URI, which might be a path to a file on the web or a local URI, e.g. for a locally created recording or an available audio file. Now comes the trickiest part — showing the progress in the seekbar! Unfortunately Android provides nothing out of the box here, so we are going to implement our visualization on our own.

2.1.1 Get the current position in the audio record and apply it to the progress of our seekbar

What this does is basically get the overall duration of the audio record. Accordingly, we apply the current position of the audio to the progress attribute of the seekbar. Important to mention here is that we do this using a coroutine on a different context, so we do not block the UI thread of the app! Otherwise the user is not able to do anything while the audio record keeps playing. As you might know, you cannot reference and update a view straight from a coroutine or another thread. For this we use view.post, which updates the view accordingly on the UI thread.

2.1.2 Apply a seekbar change listener to jump to the current position of the audio record

onProgressChanged helps us to determine the current position of the seekbar progress — not the audio record. When it stays at zero or the total duration of the audio is reached, we show the play button; otherwise a pause button.
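For reference, the polling approach from 2.1.1 might look roughly like the following sketch. The `mediaPlayer` and `seekBar` names are assumed, and the code relies on the Android framework and kotlinx.coroutines, so treat it as illustrative rather than the app's actual code:

```kotlin
import android.media.MediaPlayer
import android.widget.SeekBar
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Sketch of 2.1.1: poll the player position off the UI thread and push it
// into the seekbar. Views may only be touched on the UI thread, hence
// the seekBar.post call inside the loop.
fun trackProgress(mediaPlayer: MediaPlayer, seekBar: SeekBar) {
    seekBar.max = mediaPlayer.duration
    CoroutineScope(Dispatchers.Default).launch {
        while (mediaPlayer.isPlaying) {
            val position = mediaPlayer.currentPosition
            seekBar.post { seekBar.progress = position }
            delay(200) // poll roughly five times per second
        }
    }
}
```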
(We use the same button view here but just switch the background.) onStopTrackingTouch fires when the user taps somewhere in the progress bar. Accordingly, we apply the progress position to our MediaPlayer and it continues to play from that position. Now we are able to jump back and forth in the audio record. If you would like to know more about the MediaPlayer and the SeekBar, you can find the documentation right here: https://developer.android.com/reference/android/media/MediaPlayer https://developer.android.com/reference/android/widget/SeekBar

2.2 Recording audio

Again, we are using plain Android components here. Before the app is actually able to use the Android audio recording functionality, we need to add the following to our AndroidManifest: <uses-permission android:name="android.permission.RECORD_AUDIO" /> Moreover, we add an instance of the Android MediaRecorder class to the service class that keeps track of the recording functionality. More information about the recorder's functionality can be found here: https://developer.android.com/reference/android/media/MediaRecorder We then add a basic ImageButton with a microphone icon to our chat input bar. The functionality is very simple: if the user taps the microphone, the recording starts; a second tap ends it. We use a basic click handler associated with our microphone view. The recording is saved to the device's local storage as an m4a file. Very important to mention here is that the user is asked to confirm the app's permission to record audio on the device when using the functionality for the first time. The startRecordingAudio method looks like this:
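A minimal sketch of such a startRecordingAudio method, assuming an output file in the app's private storage (the file name and the returned pair are illustrative, and this depends on the Android framework):

```kotlin
import android.content.Context
import android.media.MediaRecorder
import java.io.File

// Sketch: configure the MediaRecorder as described in the text (MIC input,
// MPEG_4 container, AAC encoder) and write an m4a file to local storage.
fun startRecordingAudio(context: Context): Pair<MediaRecorder, File> {
    val outputFile = File(context.filesDir, "record_${System.currentTimeMillis()}.m4a")
    val recorder = MediaRecorder().apply {
        setAudioSource(MediaRecorder.AudioSource.MIC)
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
        setOutputFile(outputFile.absolutePath)
        prepare()
        start()
    }
    return recorder to outputFile
}
```

Note that the MediaRecorder configuration calls must happen in this order (source, format, encoder, output file) before prepare() and start().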
Accordingly, we set the audio input source with setAudioSource(MediaRecorder.AudioSource.MIC), define the output format for the recording with setOutputFormat(MediaRecorder.OutputFormat.MPEG_4), and set a commonly used encoder with setAudioEncoder(MediaRecorder.AudioEncoder.AAC). The stopRecording method stops the recorder, and we add the generated file as an attachment to our created chat message. That's it!

3. Summary

After finding out how Android components like SeekBar, MediaRecorder and MediaPlayer work in particular, the remaining implementation in our app's context was very straightforward. I recommend simply starting to use those components and playing around with them a bit! Moreover, there exist a lot of basic tutorials and code snippets on all the known platforms. I hope you enjoyed my article and gained some useful insights for your next app project. Take care! Alex
https://medium.com/coyo-tech/coyo-hackathon-audio-messages-for-the-android-platform-755a06bb97b3
['Alexander Prodan']
2020-06-29 13:16:49.519000+00:00
['Android', 'Software Development', 'Mobile', 'Hackathons', 'Mobile App Development']
How we build data visualizations for a global audience
The Institute for Health Metrics and Evaluation (IHME) is an independent research organization whose mission is to improve the health of the world’s populations by providing the best information on population health. A simpler way to summarize what IHME does is: We collect, process, and distribute big data for global population health. Part of that mission is to distribute data in ways that inform a diverse user group including expert global health researchers, ministers of health, policymakers, and the general public. To that end, we provide a suite of more than twenty data visualization products that run the gamut from static charts to highly interactive tools with dozens of controls and chart types. The variation in our data visualizations reflects the diverse needs of our audiences, from the layperson who prefers a curated story to the expert user who wants the flexibility to ask their own questions of our population health datasets. The first few entries in a list of IHME’s more than twenty data visualization tools. On the Data Visualization team, we commonly get this question from users of our products: How did you build that visualization? The goal of this article is to answer that question in as much depth as we can provide, and hopefully provide a starting point for anyone interested in building their own web-based visualization.

TL;DR

- Rather than using an off-the-shelf data visualization platform, we build bespoke data visualizations using JavaScript, D3, and React, including two of our own open source libraries: IHME-UI and ScrollyTeller
- Our back end technologies have historically used the LAMP stack, but we are moving toward microservices architectures with NGINX and Node/Express back ends

Who built that?

Our visualizations are created by the Data Visualization team, which currently consists of six developers, a technical product manager, a development manager, and a team lead.
We are also supported by multiple technology teams, including database developers that maintain our MySQL databases, an infrastructure team that manages the hardware and virtual environments we use to deploy our web technology, and a central computation team that handles organization-wide computational assets. What we build is guided by requests from health data researchers at IHME, external collaborators at health ministries and academic institutions, and the general public, who also supply IHME’s data in the first place. In that sense, the entire IHME community has a hand in building the tools we create. Functionally speaking, the Data Visualization team at IHME operates as many software development teams do, using agile development practices, two-week sprints, and code reviews. We track work efforts using agile project management software and use Git for source control. Our developers are very much full stack in the sense that we write our own APIs, create and manage small databases, write the visualization code, and manage application deployments.

How we build it

A major difficulty in summarizing how we build visualizations is that it’s not entirely consistent across our twenty-plus applications. Like many organizations that have been building web applications for more than a decade, we have a mix of legacy code and newer technologies that we are moving toward. With that said, the following sections generalize the most common technologies we use on the front end and back end of our applications.

How do you make those impressive charts?

Most people asking “How did you build that?” are probably most focused on how we created the chart itself. In other words, “How did you convert rows of data into that great bar chart/tree map/line chart/scatter plot?” A pyramid chart view of death rates by cause for selected countries in the GBD Compare visualization. The short answer is: from scratch, using JavaScript and CSS.
The longer answer is: we use custom D3.js code in older applications, and commonly use D3.js in combination with React.js in our newer applications.

Vanilla JavaScript & D3.js

Some examples of D3.js code can be found on bl.ocks.org or ObservableHQ.com. Many of these code samples offer a good starting point for learning code patterns to build charts in JavaScript, but generally aren’t modular enough, don’t handle component state very well, and don’t follow our code style practices (a modified Airbnb style), so they aren’t directly usable for us. Thus, many of our older D3.js components are coded from scratch and are not open source. This US map and a derived choropleth on ObservableHQ are examples with code patterns similar to the way some of our internal D3.js code is structured.

React with D3.js

To standardize some of our visualization code, the Data Visualization team created an open source repository called IHME-UI. Like many React-based data visualization frameworks, IHME-UI uses React to manage component rendering to the DOM, while leveraging D3.js for low-level chart scaling, layout, and map transformation functionality. Elijah Meeks has an excellent discussion of the tradeoffs of this approach in his article and book on the subject. Amelia Wattenberger also has an excellent instructional blog post on the topic. The example below is from the IHME-UI demo files, and shows how a chart is composed from individual React components that represent different parts of the chart, such as scales and symbols (lines in this case). An <AxisChart> component encloses <XAxis>, <YAxis>, and <MultiLine> components to compose a complete line chart. The <XAxis> and <YAxis> components leverage D3.js to compute transformations from x/y space to pixel space, and to format ticks and tick labels. The <MultiLine> component uses D3's svg path function to place the lines in the appropriate position in pixel space within the svg element.
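The composition described above (an AxisChart wrapping axis and line components) might look roughly like the following JSX sketch. The prop names here are illustrative guesses, not the exact IHME-UI API; the repository's demo files show the real, working version:

```jsx
// Rough shape of the composition: AxisChart computes the x/y scales from
// its domains, the axes render ticks and labels, and MultiLine draws one
// SVG path per series. Prop names and data shape are assumptions.
<AxisChart
  width={800}
  height={400}
  xDomain={[2000, 2019]}
  yDomain={[0, 100]}
  xScaleType="linear"
  yScaleType="linear"
>
  <XAxis label="Year" />
  <YAxis label="Death rate (per 100,000)" />
  <MultiLine
    data={dataGroupedByLocation}
    dataAccessors={{ x: 'year', y: 'value' }}
  />
</AxisChart>
```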
An example from the demo files of IHME-UI showing React code to generate the chart. IHME-UI was created primarily to unify the look, feel, and behavior of some IHME charts, but is far from complete in terms of its available chart types. For those interested in building some of the more common chart types using React, several other React-based charting libraries like Victory, Semiotic, React-Vis, and Recharts use similar approaches to customizing axes and chart symbols. Other React-based libraries like Nivo offer higher level implementations of individual charts where props determine axis and shape behaviors. Scrollytelling Scrolly-what? Several of our visualizations, such as the child mortality and tobacco control visualizations, are designed to guide the user through a story on a given topic, with the primary interaction being that the user scrolls to continue through the story. Whereas many of IHME’s visualizations are exploratory to allow experienced users to ask their own questions of data, scrollytelling visualizations are explanatory to appeal to users with less experience in a given global health topic. A scrolling data story about mapping global child mortality rates. To create these visualizations, our team wrote an open source library called ScrollyTeller, which provides a framework for dynamically creating a scrolling data story from configuration files and tabular data containing the story “narration”. See this link for a scrollytelling data story that explains how ScrollyTeller works. What about the back end? As most web developers are well aware, the front end code that renders our visualizations couldn’t exist on the web without a significant amount of development infrastructure. Visualizations with interactive, dynamically changing charts and multiple views require robust web technology, with flexible backend APIs to organize and serve the data from databases, not to mention varying amounts of web traffic, data caching, etc. 
We won’t go into too much detail about how we deploy, but for anyone interested, we describe some generalities about the types of development methods we use. In other words, what does our stack look like? A typical stack: LAMP & JavaScript front end The LAMP acronym might not mean much to those unfamiliar with web technologies. In our case, it stands for Linux, Apache, MySQL, PHP, which forms both the web server and the back end API for many of our older applications. In most cases, these projects are monorepos containing all of the code necessary to build and deploy the applications. An easy way to explore our stack is to break down the file structure of one of these monorepos for a typical project. The main components are: An index.php file, which is the entry point into the web server, and routes the web and api server using Apache or sometimes using the Slim PHP framework. file, which is the entry point into the web server, and routes the web and api server using Apache or sometimes using the Slim PHP framework. An api folder to host backend PHP files. The API connects to IHME’s MySQL databases and uses SQL queries to query data via http routes consumable by the front end. folder to host backend PHP files. The API connects to IHME’s MySQL databases and uses SQL queries to query data via http routes consumable by the front end. A php-templates folder to serve the base web components (usually just header, footer, body in HTML format). folder to serve the base web components (usually just header, footer, body in HTML format). A Docker folder + Jenkinsfile to support automated builds via Jenkins, a well known open source automation server. The Jenkinsfile is written in Groovy and orchestrates the containerization of the Linux/Apache web server environment, which is deployed via Rancher. folder + to support automated builds via Jenkins, a well known open source automation server. 
- A src directory containing the front end JavaScript code, CSS, and any static resources such as images or icons. Most of our D3.js (and/or React) code that renders the data visualizations is here.
- A variety of dotfiles and other configuration files that configure Composer (PHP dependency management), Node (JavaScript dependency management), and Webpack/Babel for transpiling and bundling our front end code.
- A README.md file to tell our developers how to set up the project.

File structure for a generic IHME application to illustrate API, source code, developer tools, and deployment.

This type of architecture has served us well, and because many of our projects are structured in this way, we can get up and running relatively quickly with this stack. That said, many of our developers like to work exclusively in JavaScript and sometimes Python, which has prompted us to explore some different project setups.

A more modern stack: LEMN(?) stack & React front end

Many of our more recent projects have replaced Apache web servers with NGINX, and replaced PHP/Slim back end servers with Node/Express. Thus, many of our stacks use variations of Linux/NGINX/MySQL/Node. In most cases, we still stick to using monorepos, but break each of the services up into their own containers that run separate Node or Apache servers.
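Concretely, the NGINX piece of this kind of stack boils down to a reverse-proxy configuration along these lines (container names and ports here are hypothetical, not IHME's actual setup):

```nginx
# Illustrative reverse proxy: route page traffic to the app container
# and API traffic to the Node/Express container.
server {
    listen 80;

    # Front end bundle served by the app container
    location / {
        proxy_pass http://app:3000;
    }

    # API routes handled by the Node/Express back end
    location /api/ {
        proxy_pass http://api:4000;
    }
}
```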
The main components of this stack are:

- An app directory containing the front end JavaScript code. The typical React/Redux file structure may look familiar here, and may vary depending on the project. Upon deployment, the app files are bundled and copied to a separate app container (using Docker/app.Dockerfile).
- A variety of dotfiles and other configuration files that configure Node (JavaScript dependency management) and Webpack/Babel for transpiling and bundling our front end code.
- A server folder that hosts an Express backend. In this case, the backend is a separate Node/Express application that is built into its own container upon deployment (using Docker/api.Dockerfile).
- A Docker folder + Jenkinsfile that contain code to containerize the Linux/NGINX web server environment, and configure NGINX to route (proxy) traffic to each of the app and api containers. Again, all of this is deployed via Rancher.

It’s worth noting that this configuration still uses a containerized version of Apache (hence the index.php file) for the web server, which exists primarily to conform with some existing Apache-based infrastructure at IHME.
- A README.md file to tell our developers how to set up the project.

File structure for more modern applications using NGINX/Express/Node back ends.

Microservices architecture in Local Burden of Disease visualizations

The map-based Local Burden of Disease visualization.

A notable exception to our development environments is the Local Burden of Disease visualizations, which are map based and are deployed using a microservices architecture with completely separate components in different code repositories. Because the same codebase is used to display many different health indicators, the codebases must also be highly configurable. Configuration is accomplished via a JSON-based configuration language that defines the dimensions and shape of the data, database specifications, and how different controls and chart components should be rendered in the view. This stack is partially open sourced as the Choroscope platform, which is summarized nicely here. On the front end, the Local Burden of Disease visualizations also utilize Leaflet.js, or more specifically, React-Leaflet, to handle functionalities such as basemaps, map layering, and zooming/panning.

Got all that?

Hopefully this gives a flavor for the complexities of building, deploying, and maintaining the twenty-plus visualizations that IHME distributes. Please visit the IHME website for the latest on what the organization is up to, our data visualizations page for a complete list of our tools, or our careers page if you are interested in working with us.
https://medium.com/ihme-tech/how-we-build-data-visualizations-for-a-global-audience-550b2cb7e4e6
['Ryan Shackleton']
2020-03-02 21:18:29.278000+00:00
['Big Data', 'Data Storytelling', 'D3js', 'Data Science', 'Data Visualization']
How I Radically Changed My Lifestyle with CrossFit
How I Radically Changed My Lifestyle with CrossFit

Thank you CrossFit Vienna :) Photo by Victor Freitas on Unsplash

I thought I knew a thing or two about keeping yourself fit and going to the gym, but when I started feeling out of breath walking up a flight of stairs, I was a little concerned. I had previously expressed my unhappiness with my gym routine to my colleague. When I went to the gym, the exercises were mostly monotonous, and I was alone. Coupling that with being left breathless by a flight of stairs pushed me to find an alternative. My colleague had recommended CrossFit, and the only thing I knew of it was that it was hard. I knew beforehand that it had become a global phenomenon, with many people grouping together to commit to fitness goals, which makes sticking to them easier and keeps you motivated. I just was not fully versed in the vernacular, such as Olympic lifts, WODs, etc. Since I started doing CrossFit, I have seen how much of a different person I have become. Not only has this new workout regimen introduced me to new friendships, but it also keeps me physically fit without my having to brave the awkward loneliness of a typical gym.

What is CrossFit?

Because of the sensational hype from media and social media, I too only knew that CrossFit was a series of “constantly varied, functional movements” which are “executed at high intensity.” But in its true essence, CrossFit means a lot more than these generic definitions. Over time, I have experienced that CrossFit takes the best aspects of both fitness and sports and smashes them into one bowl. The CrossFit movements seldom repeat, so you get a new workout every day. It keeps things exciting and non-monotonous — something I was looking for in an activity. The trainers are cordial and very generous with teaching the movements correctly.
They also train people to strengthen the movements we typically use for daily life tasks, e.g., picking something up from the ground or putting something overhead, to engage the same muscle sets.

Photo by Victor Freitas on Unsplash

My First Box

When I went to my first CrossFit trial, I was knocked out after the session. It was incredibly intense and dynamic. We were a group of 3 people with a trainer who showed us the exercises. If one of us did an exercise wrong, he would show us in detail how to do it correctly. Throughout my first CrossFit workout, I performed a combination of activities that our trainer told us fell into the Monostructural, Gymnastics, and Weightlifting categories. Monostructural exercises or movements were very similar to regular cardio, e.g., running, rowing, jumping rope, etc. Gymnastics exercises included push-ups, handstands, muscle-ups, etc. In the Weightlifting category, we would use added weights to push targeted muscles to exhaustion. With these three categories in the mix, I can see how CrossFit aims to equip us with a jack-of-all-trades approach rather than mastery of a particular skill, and it has turned out to be worth it, at least for me.

Photo by CrossFit-Vienna on Eversport

Basic Vernacular

It is fascinating to know that CrossFitters have a different language for their regimen, and it gives you a sense of inclusion if you know some or all of it. I learned that a CrossFit gym is called a “box.” A box may seem like a basic gym to some, but for a CrossFitter, it cannot be any less than heaven. I also learned some other terminology, such as WOD, which means “workout of the day,” AMRAP, which means “as many reps/rounds as possible,” and Score, which means the total number of reps completed during a given workout.

Common CrossFit Movement Exercises

For beginners, it is better to start with smaller weights and gradually increase the exertion levels.
You can also adjust the intensity of your workout based on the following common movements:

- Plyometric jumping
- Olympic weightlifting
- Kettlebell exercises
- Explosive bodyweight movements

While many different boxes might have different variations, I have performed a variety of exercises that include running 400 meters, kettlebell swings, pull-ups, rowing for 1000 meters, thrusters, wall balls, sumo deadlifts, box jumps, push press, etc.

Cost

CrossFit is an expensive affair. I paid €140 per month (about $171), and it is costlier than my gym membership, which only cost me €20 a month ($25). I often think about whether this expensive switch was worth it, and over time, I have realized that it was one of the best decisions I ever made. I feel that I am mentally as well as physically at the top of my fitness levels. I feel energetic throughout the day, and my productivity levels are higher than usual. Even after a single session, I would feel my creative juices flowing, and I would be ready to focus on my work. Even though CrossFit is expensive, if you see a difference in your life, I would suggest that you give it a try as well.

Benefits of CrossFit

Over the months that I did CrossFit almost religiously, I felt great health benefits that made my life completely different. Some of them are as follows:

You Get Stronger

Spoiler alert: doing CrossFit makes you stronger. Lifting weights, resistance training, and gymnastic movements combine to make anyone physically stronger and fitter. Since we do compound movements in CrossFit, nearly all the muscles of our body are involved, and thus we are strengthening muscles from head to toe. It can come in the form of a simple squat or a deadlift. A 2017 study found that multi-joint (compound) exercises are more useful for strength increases than single-joint exercises, e.g., bicep curls, calf raises, etc.
Improves Aerobic Fitness

CrossFit exercises often get broadly categorized as high-intensity power training, commonly referred to as HIPT. This training increases the maximum amount of oxygen a person can consume during a high-intensity activity. Some researchers stress that CrossFit improves overall aerobic fitness because these exercises directly impact your oxygen intake over time. Some researchers disagree, however, so the point needs further clarification.

Ideal for Weight Loss

Many people struggle with weight loss, and since CrossFit manages to incorporate a variety of body movements that demand higher doses of muscle ATP, people choose CrossFit for serious calorie burn. Research done on a team at the University of Wisconsin-La Crosse found that players who did CrossFit burned 12 or more calories per minute, more than their teammates. The compound exercises use a great deal of energy, forcing our bodies to utilize fat for fuel, leading to weight loss. CrossFit’s plus point is that you do not have to lift weights to gain the benefits of compound moves; simple bodyweight CrossFit exercises can deliver the same results.

Positive Effects on Metabolism

During CrossFit, I noticed that I gained muscle mass. When you gain muscle mass, your metabolism becomes much faster. A faster metabolism means that you burn more calories per day, all day long. The more intense the exercise you do, the more oxygen your body consumes after finishing a session, triggering the “afterburn effect.” This post-exercise effect burns an additional five to fifteen percent of the calories burnt during exercise, which is impressive.

Improves Agility, Balance, and Flexibility

CrossFit workouts often include exercises that involve movements that we do every day. These movements are called functional movements, and they help improve body agility, balance, and flexibility.
These movements also reduce the risk of injuries and improve the quality of life as we age.

Photo by Thandy Yung on Unsplash

Is CrossFit Safe?

I was worried about getting injured because of the high-paced, high-intensity exercises, and one quick Google search almost made me give up my newfound fitness craze. However, my trainer assured me that as long as I performed the activities with proper form, did my movements carefully and at a moderate speed, and lifted only what I could in a session, I was good to go. If you are pregnant and already practice CrossFit, it is fine to continue, but it is better to consult your doctor. If you are injured, you should stay away from CrossFit until you fully recover. You might also need to work with a physical therapist first to overcome muscle degeneration or bone loss. Trainers often tell people over the age of 65 not to pursue CrossFit, as it may not be safe for them. If someone is physically fit, though, they can start CrossFit with their physical trainer’s and doctor’s approval. If you do not want to be physically present in a box due to the current world conditions, you can stay safe in the comfort of your home, because a lot of the exercises do not need any equipment. For me, I was always looking forward to the human contact with my fellow CrossFit members. CrossFit Vienna, where I used to go, is one of the best places for physical fitness in Vienna, Austria. If you want to know more about them, you can visit https://www.crossfitvienna.at/. Now that I am starting my new life in Bali, Indonesia, I will continue my CrossFit journey there, for I have seen how my life has changed for the best.
https://medium.com/in-fitness-and-in-health/how-i-radically-changed-my-lifestyle-with-crossfit-10cee6d17da6
['Changwon C.']
2020-12-22 15:14:32.705000+00:00
['Sports', 'Weight Loss', 'Gym', 'Health', 'CrossFit']
Best Mary J Blige Songs: 20 Essentials From The Queen Of Hip-Hop Soul
Da’Shan Smith

Photo courtesy of Republic Records

Over the course of her decades-spanning discography, Mary J Blige has been a conduit for communal pain and healing. She’s shared her world and given generously of herself over 13 studio albums, and remains a singular force in R&B. Starting out as the artist who showed the mainstream how to successfully blend New Jack Swing into a more soulful brand of hip-hop-based R&B, Blige continued to evolve her sound with every decade, as others followed her lead. From her early beginnings in the 90s to her continued impact on pop music through the 00s, and her victory lap in the 2010s, Mary J Blige is one of R&B’s brightest and most innovative vocalists. The best Mary J Blige songs tell the story of her artistry — as revealed by these 20 essential tracks. Think we’ve missed one of your best Mary J Blige songs? Let us know in the comments section, below. Listen to the best of Mary J Blige on Apple Music and Spotify, and scroll down for our 20 best Mary J Blige songs.

Best Mary J Blige Songs: 20 Essentials From The Queen Of Hip-Hop Soul

20: ‘Deep Inside’

Even after seven years of soul-bearing music, Blige had barely scratched the surface when it came to revealing herself as an artist and a person. But on her fourth studio album, 1999’s Mary, she realised you can give too much of yourself away. Built around a piano loop from Elton John’s ‘Bennie And The Jets’ (the music legend replayed the part himself for the recording), ‘Deep Inside’ addresses all the people who wanted a piece of the singer. “The problem is for many years/I’ve lived my life publicly/And every time I find someone I like, gotta worry about/If it’s really me that they see,” she sings.

19: ‘Take Me As I Am’

Coming along six years after ‘Deep Inside’ was the hit single ‘Take Me As I Am’, from Blige’s seventh studio album, The Breakthrough.
Over the whimsical melody of Lonnie Liston Smith’s ‘Garden Of Peace’, Mary confesses, “She’s been down and out/She’s been wrote about/She’s been talked about, constantly.” Throughout the song, Blige switches from protagonist to observer and back again. It’s another tale of survival, but, this time around, Blige is singing from the bright side of the tunnel, knowing she “put my life all up in these songs”.

18: ‘Your Child’

The most enduring Mary J Blige songs are built upon lyrical storytelling over dramatic instrumentals. The Mary album does not lack in that department, as evidenced by ‘Your Child’, for which Blige takes what could have been a clichéd ballad and turns it into something more complex. When faced with a partner who has a child outside of the relationship, it becomes more of a story of humanising the “other woman”, and letting the listener heal with them. Sounds ripe for a dance remix, right? Maybe not. But even Mary’s tales of heartbreak can burn up the Billboard Hot Dance Club Play chart — as ‘Your Child’ did when it hit №1 in 2000.

17: ‘Enough Cryin’

The Breakthrough turned out to be a liberating record that redefined Blige’s career on her own terms. While she channelled the hurt exhibited on her previous six albums, The Breakthrough could be seen as a triumphant comeback from a wiser, more mature R&B and pop maven. Over a Darkchild beat, ‘Enough Cryin’ made good on its promise. Most Mary J Blige songs are based on a hip-hop/soul fusion, but on ‘Enough Cryin’, you get a glimpse of her MC skills, as her alter ego, Brooklyn, proves she could dominate both genres with ease.

16: ‘Everything’

Many songs stand the test of time thanks to sampling and interpolations across genres. One example that got a second life is The Stylistics’ ‘You Are Everything’, which serves as the backbone for this Share My World single.
‘Everything’ sees Mary at her most blissful as she happily sings about a “love so pure”, upholding the belief that all great art comes from pain.

15: ‘Just Fine’

There can’t be a wedding, office party, cookout, or family reunion function without hearing this gem from Mary’s 2007 album, Growing Pains. Due to her eccentric dance moves and aging like a fine wine throughout her career, many R&B fans regard Blige as a famous “auntie”. ‘Just Fine’ could very well be the anthem for all the aunties who just “wanna move”, “wanna have fun”, and “wouldn’t change” their life. Channelling the funk groove of Marvin Gaye and the disco beat of Michael Jackson circa Off The Wall, ‘Just Fine’ is a quintessential throwback party anthem.

14: ‘You Remind Me’ (featuring Greg Nice)

When Blige debuted in 1992, with What’s The 411?, she had arrived at the forefront of an R&B renaissance and genre evolution. New Jack Swing was starting to evolve, as the sound became more aligned with hip-hop production. ‘You Remind Me’ is a classic example of this, with Blige delivering stunning melismas and passionate belts over Dave “Jam” Hall’s funky production, earning her the title “Queen Of Hip-Hop Soul”.

13: ‘Share My World’

“Cool”, “suave” and “effortless” are three words that describe the relaxing tone of Blige’s vocals on her third album, Share My World. While her first two albums were hard-hitting, with heavy emotional delivery, Share My World offered a more laidback approach, acknowledging hip-hop soul’s transition into a more electro-fuelled phase of R&B in the new millennium. Part of this tonal shift was also thanks to Blige’s more positive state of mind and her departure from working with Puffy as a producer. The title track of the album, ‘Share My World’ earns its place among the best Mary J Blige songs, as she floats over the glitchy, trip-hop beat.
12: ‘Don’t Go’

Swapping typical hip-hop breakbeats for samples like Guy’s ‘Goodbye Love’ and DeBarge’s ‘Stay With Me’, producers Thompson and Puffy create the perfect backdrop for Blige’s hip-hop soul balladry on ‘Don’t Go’, a My Life classic. It’s one of the more downtempo cuts on the album, but Blige still imbues it with the kind of soulful yearning that was usually reserved for the Marvin Gayes of the world.

11: ‘U + Me (Love Lesson)’

In 2017, Blige asserted her dominance, going toe-to-toe with trap-tinged R&B and pop music. On her 13th studio album, Strength Of A Woman, she bounces back from recent divorce drama with tracks such as ‘Glow Up’, ‘Thick Of It’ and ‘Love Yourself’, which all tap into the trend while maintaining Blige’s personal brand of soul. The album’s staple ‘U + Me (Love Lesson)’ is a sultry break-up anthem. Blige doesn’t regret the relationship, but, rather, feels fortunate to have survived it. “In too deep without imperfection/Not always good, but I stayed on my feet,” she sings, proving once again that she’s the master at moving on. ‘U + Me (Love Lesson)’ would reach №1 on Billboard’s Adult R&B Songs chart to become one of the defining Mary J Blige songs.

10: ‘Not Gon’ Cry’

Resilience has been a consistent theme throughout the best Mary J Blige songs. She has consistently paved the way to redemption for plenty of her listeners, particularly women who have been through relatable situations. Her appearance on 1995’s Waiting To Exhale soundtrack was an essential ingredient for a film centred around four African-American girlfriends who navigate their own stories of love, dating and heartbreak. On this staple, Blige promises, “I’m not gon’ shed no tears,” realising, “‘Cause you’re not worth my tears,” despite having sacrificed “11 years” to unrequited commitment. Peaking at №2 on the Hot 100 in 1996, ‘Not Gon’ Cry’ would appear on her third album, 1997’s Share My World.
9: ‘All That I Can Say’

Through all the heartache and pain of her discography, joyous moments may sometimes seem rare. ‘All That I Can Say’ is therapeutic, to say the least, finding the singer in one of her most blissful states. Opening the Mary album, it also features the song’s author, Lauryn Hill, providing background vocals. In one of her most earnest performances, Blige calls her partner “heaven-sent”, which is also an apt description of the song itself.

8: ‘I’m Goin’ Down’

The best cover versions not only do the original songs justice, but also add new depths to their meaning. There’s Whitney Houston’s powerhouse cover of Dolly Parton’s ‘I Will Always Love You’, Sinéad O’Connor’s heart-wrenching version of Prince’s ‘Nothing Compares 2 U’, and Mary J Blige’s sultry rendition of Rose Royce’s 1976 classic ‘I’m Going Down’. Fittingly, for the pain expressed throughout her sophomore effort, My Life, ‘I’m Goin’ Down’, produced by Sean “Puffy” Combs and Chucky Thompson, would showcase the resilience in Blige’s earthy soprano voice, becoming one of the most beloved Mary J Blige songs of all time.

7: ‘No More Drama’

Just as its parent album declares, the track ‘No More Drama’ sees Blige navigate unfamiliar territory: contentment. Recalling the heartbreak and the ups and downs she’s navigated through her life, Mary declares “No more drama” in one of her most dramatic performances, over the beat of ‘Nadia’s Theme’ from the daytime soap opera The Young And The Restless, courtesy of the star production duo Jimmy Jam and Terry Lewis.

6: ‘Be Without You’

As one of the finest cuts on The Breakthrough, ‘Be Without You’ stormed R&B radio in 2005 and spent a phenomenal 75 weeks on the charts, earning Blige two Grammy wins. It remains one of her most powerful performances, with vocal runs for days before she brings it home towards the end. It also proved she could maintain her leading lady status into the 00s.
5: ‘Family Affair’

For those who didn’t grow up on Blige’s 90s cuts, only knowing her as a balladeer, ‘Family Affair’ was a reminder that she could still get down. In 2001, Blige was heading in a brighter direction and had a healthier outlook on life. Naming her fifth studio album No More Drama, she started a new era that summer by inviting fans to her “dancerie” and reminding them they “don’t need no hateration, holleration”, over Dr Dre’s G-Funk production. The song earned her both a place at the top of the charts (her first №1 hit) and in the history of pop-culture vernacular.

4: ‘Real Love (Remix)’ (with Notorious BIG)

The Notorious BIG stepped into the mainstream thanks to his guest appearance on a remix of this What’s The 411? single. ‘Real Love’ had already hit №1 on the R&B chart, taking its place among the best Mary J Blige songs from the jump, but Biggie’s appearance took it to the next level. “Look up in the sky/It’s a bird, it’s a plane/Nope, it’s Mary Jane,” he raps on the verse. This remix also became the template for the R&B/hip-hop collaborations that Sean “Puffy” Combs and Bad Boy would churn out for the next decade.

3: ‘I’ll Be There for You’/’You’re All I Need to Get By’ (Method Man, featuring Mary J Blige)

Blige’s many memorable collaborations have gained her respect and adoration in the worlds of both R&B and hip-hop. In 1995, she partnered with Wu-Tang’s Method Man for one of hip-hop’s most iconic love duets. The song interpolates Marvin Gaye and Tammi Terrell’s Motown classic ‘You’re All I Need to Get By’, with Mary singing the hook and mimicking its melody during her verses. Adding an extra layer to the Grammy-winning partnership is a sample of Notorious BIG repeating “Lie together, cry together/I swear to God I hope we f__kin’ die together.”

2: ‘I Can Love You’ (featuring Lil’ Kim)

On this Share My World single, the Queen Of Hip-Hop Soul hooked up with the Queen Bee herself, Lil’ Kim.
‘I Can Love You’ features one of Kim’s best verses over a sample of her own track ‘Queen B__tch’, an infamous cut released by the rapper on her 1996 debut album, Hard Core. It was a unique moment of female solidarity and a piece of hip-hop history.

1: ‘My Life’

A recurring theme throughout the best Mary J Blige songs (and all her discography) is how often she has felt misunderstood by the public, the media and her romantic partners. Underneath all the hurt and pain, Blige has reminded us, on multiple occasions, that she’s just human. On the title track from her landmark 1994 album, My Life, she delivers her most moving performance, singing on the chorus: “If you looked at my life/And see what I see,” over a sample of Roy Ayers’ ‘Everybody Loves The Sunshine’. Like so many R&B singers, Blige started in the church, and with ‘My Life’ she puts her gospel roots on display. What really makes ‘My Life’ a true standard, however, is that it’s the perfect distillation of Blige and her hip-hop soul sound. It’s cathartic, spiritual and a fine reminder of her impressive talent and wide-ranging artistry.

The Mary J Blige collection HERStory, Vol.1 is out now. Buy it here.

Looking for more? Discover the best 90s R&B songs.

Join us on Facebook and follow us on Twitter: @uDiscoverMusic
https://medium.com/udiscover-music/best-mary-j-blige-songs-20-essentials-from-the-queen-of-hip-hop-soul-2bab7c4686cd
['Udiscover Music']
2019-12-09 09:46:12.568000+00:00
['Hip Hop', 'Music', 'Lists', 'Culture', 'Pop Culture']
Data Visualization for Artificial Intelligence, and Vice Versa
Data Visualization for Artificial Intelligence, and Vice Versa

by Nicolas Kruchten

Data visualization uses algorithms to create images from data so humans can understand and respond to that data more effectively. Artificial intelligence development is the quest for algorithms that can “understand” and respond to data the same way as a human can — or better. It might be tempting to think that the relationship between the two is that, to the extent that AI development succeeds, datavis will become irrelevant. After all, will we need a speedometer to visualize how fast a car is going when it’s driving itself? Perhaps in some distant future, it might be the case that we delegate so much to AI systems that we lose the desire to understand the world for ourselves, but we are far from that dystopia today. As it stands, despite the name, AI development is still very much a human endeavour and AI developers make heavy use of data visualization; on the other hand, AI techniques have the potential to transform how data visualization is done.

TensorFlow Graph Visualizer (source)

Data Visualization for Artificial Intelligence

Artificial intelligence development is quite a bit different from typical software development: the first step — writing software — is the same, but instead of someone using the software you wrote, like in normal software development, the AI software you write then takes some data as input and creates the software that ends up being used. This is referred to as the AI system training or learning, and the end result is usually called a model. This two-step process is key to the success of AI systems in certain domains like computer vision: AI software can create computer vision models better than humans can. On the other hand, the output of the AI development process is often spoken of as a “black box” because it wasn’t created by a human, and can’t easily be explained by or to humans.
Data visualization has turned out to be critical to AI development because it can help both AI developers and people concerned about the adoption of AI systems explain and understand these systems. Respected Google researchers Fernanda Viégas and Martin Wattenberg went so far as to call their EuroVis 2017 keynote address Visualization: the Secret Weapon of Machine Learning, and Elijah Meeks, a data visualization engineer at Netflix, recently wrote that: “Data visualization of the performance of algorithms for the purpose of identifying anomalies and generating trust is going to be the major growth area in data visualization in the coming years.”

The AI development process often begins with data exploration, sometimes also called exploratory data analysis, or EDA, in order to get a sense of what kinds of AI approaches are likely to work for the problem at hand. This has historically largely been done by making charts and other visualizations of a dataset. One particular challenge of AI development is that input datasets are often of very high dimensionality: if they were represented as tables, they would have many columns. A number of data visualization techniques have been developed to help understand relationships within high-dimensional datasets, such as parallel coordinate plots, scatterplot matrices, scagnostics and various dimensionality-reduction visualization algorithms such as multidimensional scaling or the popular t-SNE algorithm.

The structure of AI software is usually that of a pipeline of steps that feed into each other in complex ways. AI developers find it helpful to be able to see and edit visual representations of the pipelines they work with, and specialized visual tools have been developed to help them do so, such as the TensorFlow Graph Visualizer system in the popular TensorFlow library, or the Microsoft Azure ML Studio.
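As a flavor of what the dimensionality-reduction views mentioned above compute under the hood, here is a hedged sketch in plain JavaScript that projects the rows of a high-dimensional table onto their first principal component, the simplest relative of techniques like multidimensional scaling or t-SNE (which are far more sophisticated in practice):

```javascript
// Project high-dimensional rows onto their first principal component.
// Hand-rolled, minimal sketch for illustration; real EDA would use a library.

function mean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }

// Column-center the data matrix (rows = observations, columns = features).
function center(rows) {
  const d = rows[0].length;
  const mu = Array.from({ length: d }, (_, j) => mean(rows.map(r => r[j])));
  return rows.map(r => r.map((v, j) => v - mu[j]));
}

// Covariance matrix of the centered rows.
function covariance(X) {
  const n = X.length, d = X[0].length;
  const C = Array.from({ length: d }, () => new Array(d).fill(0));
  for (const r of X)
    for (let i = 0; i < d; i++)
      for (let j = 0; j < d; j++) C[i][j] += r[i] * r[j] / (n - 1);
  return C;
}

// Dominant eigenvector of C via power iteration (assumes a non-degenerate C).
function firstPrincipalAxis(C, iters = 200) {
  let v = C.map((_, i) => (i === 0 ? 1 : 0.5)); // arbitrary starting vector
  for (let k = 0; k < iters; k++) {
    const w = C.map(row => row.reduce((s, c, j) => s + c * v[j], 0));
    const norm = Math.hypot(...w);
    v = w.map(x => x / norm);
  }
  return v;
}

// One coordinate per row: its position along the first principal component.
function projectTo1D(rows) {
  const X = center(rows);
  const axis = firstPrincipalAxis(covariance(X));
  return X.map(r => r.reduce((s, x, j) => s + x * axis[j], 0));
}
```

Plotting these projections (or a 2-D analogue using the top two components) as a scatterplot is exactly the kind of exploratory view described above.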
Once the AI software has learned a model from a dataset, AI developers need to be able to evaluate how well it performs at its designated task. Often this results in disappointment, leading to a need to explain and understand what the system has learned in order to improve it. In evaluation, receiver operating characteristic (ROC) curves are used to evaluate the results of classification algorithms, and silhouette plots are used to do the same thing for clustering. Data visualization is especially helpful in evaluation because models often exhibit a range of behaviours whose outcome can’t be evaluated at a single point, but rather as a trade-off curve or surface or hyper-surface, which are often understandable only qualitatively via visualization rather than numerically as a score. When it comes to understanding what a system has learned – what is driving the performance or lack thereof – visual tools such as ActiVis for deep neural nets or Clustervision for clustering are under development, to highlight just two efforts published last year. Modern AI research has, of course, expanded beyond just classifying and clustering tabular datasets to also operate on unstructured datasets such as mixtures of text, images, and speech audio. Working with large image datasets naturally lends itself to visual tools, and the recent leaps in image-recognition and labelling software have been accompanied by impressive software that researchers use to understand how their algorithms “see” the world. Techniques developed to visualize how individual units in a deep neural network operate have recently led to very interesting visual art projects such as DeepDream and Neural Style Transfer. An incredible visual exploration of the building blocks of how deep nets “see” has recently been published over at Distill.pub, as has a visualization of how handwriting recognition works.
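A minimal sketch of the ROC-curve evaluation mentioned above, again assuming scikit-learn: the `(fpr, tpr)` arrays trace out the trade-off curve that is usually judged visually, while the AUC collapses it to a single score.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# Each (fpr, tpr) pair is one point on the trade-off curve; in practice
# these arrays are plotted rather than read numerically.
fpr, tpr, thresholds = roc_curve(y_test, scores)
auc = roc_auc_score(y_test, scores)
print(f"AUC: {auc:.3f}")
```

This illustrates the article's point about scores versus curves: the AUC is a convenient number, but the shape of the curve (where the false-positive rate climbs) is the part that is only really understandable visually.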
Once an AI system — the AI software and the models it produces — has been developed and performs to the satisfaction of its creators, a final critical hurdle needs to be cleared before it can be used to automate any real-world tasks: human gatekeepers must be convinced that this is a safe and profitable thing to do. In the case of self-driving cars or medical image analysis, for example, we need to be able to trust software to make life and death decisions. In a way, this is the same challenge as exists in development: a human needs to understand how a system works and what kinds of results it can produce; however, gatekeepers usually have very different backgrounds from developers — they are businesspeople or judges or doctors or non-software engineers. The humans who are involved in approving AI systems for use are often those who currently perform similar tasks, and want to know why an AI system responds to data the way it does, couched in the terms they themselves reason in. This “interpretability” requirement has historically led to the use of less-powerful but more easily-explained, easily-visualized model structures such as linear regressions or decision trees. Recently, however, systems like Rivelo or LIME have been developed to visually explain individual predictions of very complex models (regardless of the model structure) with the explicit goal of helping people become comfortable with and trust the output of AI systems. Data visualization has also been helpful in explaining some of the economic or fairness trade-offs involved in using artificial intelligence instead of the human variety to make various types of decisions. One final area where data visualization is useful to AI development is education. This is also related to understanding how models work, but aimed at different audiences: future AI developers in training or interested laypeople who want to understand the algorithms that have an increasing impact on their lives.
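The interpretable model structures mentioned above (linear regressions, decision trees) can be illustrated with a short sketch, assuming scikit-learn: a linear model's learned coefficients give a per-feature account of what drives its predictions, which is exactly what a coefficient bar chart for a gatekeeper would show.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Each coefficient says how strongly one (standardized) feature pushes the
# prediction up or down: a bar chart of these values is the classic
# interpretability visual.
coefs = model.named_steps["logisticregression"].coef_[0]
top3 = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:3]
for name, c in top3:
    print(f"{name}: {c:+.2f}")
```

Tools like LIME take this idea further by fitting a simple local explanation like this around individual predictions of an otherwise opaque model; the sketch above shows only the directly-interpretable baseline case.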
Interactive visualizations such as the neural network playground, Distill.pub and others have served as an introduction to AI theory and practice for hundreds of people, and this shows no sign of slowing. Artificial Intelligence for Data Visualization So far I have provided a number of examples of how data visualization can be useful in artificial intelligence development, but the reverse is also true. AI techniques can be useful in data visualization today and have the potential to be even more so in the years to come. The use of advanced data-processing software, such as AI software, coupled with data visualization is sometimes referred to as Visual Analytics. In the visual analytics paradigm, a human enters into a discourse with a software system about some data, querying it and receiving results back in visual form, so as to accomplish a goal, be it answering a specific question or just getting a feel for what a dataset might contain. Current, slightly clunky examples of natural language approaches to this include Wolfram Alpha and Microsoft PowerBI Natural Language Querying. As modern AI systems get better and better at interpreting human speech, however (for example with Apple’s Siri and Amazon’s Alexa), we might expect that this type of conversational visual analytic discourse will become more natural and powerful over time. For example, AI systems have recently been developed which can generate realistic-looking images from textual descriptions. This suggests that while today the actual process of visualizing data is largely a matter of either pointing and clicking or writing out instructions in a programming language, AI might make it possible to visualize data by chatting out loud with a computer, Star Trek style — although it’s far from clear that this is even desirable.
Interestingly, this process can also run backwards: AI systems can generate text or speech from data or graphics, automatically captioning them, and this has been applied to data visualization as well, for example Tableau’s integration with NarrativeScience. This image-to-text approach has also been extended to enable AI systems to start from a sketch or visual specification for a website, and then create that website itself: going from image to code (a structured form of text). AI systems can even dynamically generate new font faces or shoe designs based on examples of what is desired. In terms of applying these techniques to datavis, Bret Victor’s Drawing Dynamic Visualization and Adobe’s Project Lincoln demos show what non-AI sketch-based input systems might look like for visualization. It may well be possible to blend these approaches to create an AI system that can take either a freehand sketch of some desired output or some examples of visualizations similar to the desired one, and automatically create the code for a visualization pipeline that would generate the target visualization when applied to arbitrary data. If feasible, this would in a sense represent AI systems competing with human business intelligence developers or data visualization designers, much like they already compete with human computer-vision programmers and may one day seriously compete with human translators or radiologists. On a more collaborative note, when a human and machine both participate in driving a process like this, querying and suggesting in turn, the resulting process is referred to as a mixed initiative system. A natural next step beyond an AI system producing visualizations on demand as the result of a human query about data is the notion of an AI system suggesting interesting or useful visual representations of data without a query. This is sometimes called visualization recommendation, and has recently been an active area of data visualization research.
AI systems have already been used to create powerful and profitable recommendation systems for books, music, movies, clothing and many other products, so there may be reason to believe that AI techniques could apply to visualization recommendation as well. As a thought experiment, one intriguing possible application of a mixed-initiative system that could mash up a number of the techniques listed above would be an automated anomaly detection system with associated visualization recommendation and verbal explanations. Anomaly detection is an area where AI systems alone are unlikely to be particularly useful, at least the ones we know how to build today, because by definition anomalies are strange and new situations for which there is not a lot of training data, so humans need to be involved in responding. One could imagine an AI system continually monitoring a stream of data about a complex system like a data center or a nuclear power plant, and when something new or unusual happens, it would alert a human and automatically recommend a custom visual representation of that specific anomaly, providing a verbal description of what is interesting about it. The human operator could then converse with the AI system to refine their understanding of the situation and take appropriate action. Such a system would contrast with the way systems monitoring is currently done, which involves creating a predefined set of alert conditions which are hard to tune by hand and/or predefined dashboard-style visualizations that humans quickly get bored with and ignore, neither of which often serves to uncover novel anomalies anyway. Conclusions & Further Reading In this article I’ve tried to organize and highlight some of the rich interactions between data visualization and artificial intelligence techniques; simple and complex, existing and speculative.
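Returning to the anomaly-detection thought experiment above: the detection half of such a system might look like the following sketch, with scikit-learn's IsolationForest as an assumed stand-in (the visualization-recommendation and verbal-explanation halves remain speculative).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Train the detector on "business as usual" data only.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
detector = IsolationForest(random_state=0).fit(normal_traffic)

# A clearly unusual observation arrives; predict() returns -1 for
# anomalies and +1 for points that look normal. A full system would now
# alert a human and recommend a visualization, not act on its own.
anomaly = np.array([[8.0, 8.0]])
print(detector.predict(anomaly))
print(detector.predict([[0.0, 0.0]]))
```

Note how this matches the division of labour described above: the detector only flags that something is strange; deciding what the strangeness means, and what to do about it, stays with the human.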
I’ve surely missed some very interesting examples or potential future directions, and in my enthusiasm for what may be possible I’ve surely underplayed significant technical challenges, both of which I’d be glad to hear about. If you’d like to get in touch to brainstorm or let me know about interesting connections between datavis and AI, please reach out! If you are interested in following developments in these fields and their interactions, the following people, publications and conferences are great starting points:
https://medium.com/plotly/data-visualization-for-artificial-intelligence-and-vice-versa-a38869065d88
[]
2018-03-22 17:49:59.503000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Visualization', 'Visual Analytics']
Let the record show.
John Pavlovitz has been on fire recently — or I should say I just became aware of him and his writing during the campaigns — and it’s great. He’s a pastor in North Carolina, so he adds diversity to the group I tend to tune into, but he often writes from a place I recognize in myself. On this, the day when we will swear in a wholly unqualified person to the most powerful position in the world, John has declared his strong opposition to Trump. You’ll sympathize with many of the emotions he has and statements he makes. The idea that we want — maybe even need — to tell the world that we are NOT OK with Trump or those who supported him has a strong pull for me. That is the thinking behind the marches and protests so many of us will participate in this weekend. And while I know I just introduced my plan to deal with life under Trump, I like this idea well enough that I am reiterating Pavlovitz’s proclamations here for my current and future self: I do not believe this man is normal. I do not believe he is emotionally stable. I do not believe he cares about the full, beautiful diversity of America. I do not believe he respects women. I do not believe he is pro-life other than his own. I do not believe the sick and the poor and the hurting matter to him in the slightest. I do not believe he is a man of faith or integrity or nobility. I do not believe his concern is for anything outside his reflection in the mirror. I believe he is a danger to our children. I believe he is a threat to our safety. I believe he is careless with our people. I believe he is reckless with his power. I believe America will be less secure, less diverse, less compassionate, and less decent under his leadership. “And if I prove to be wrong, it will be one of the most joyful errors of my life.
I will own these words and if necessary, willingly and gladly admit my misjudgment because it will mean that America is a better and stronger nation, and the world a more peaceful place.” And like John, I’d love to be wrong. Even with all that we have seen thus far, I still hope that Trump decides to serve all the people of his nation rather than just the one he has served his entire life.
https://medium.com/alttext/let-the-record-show-68f2dcd15955
['Ben Edwards']
2017-04-23 04:54:37.593000+00:00
['Personal', 'Trump', 'Society', 'Politics', 'America']
Future Leaders: Adam Braimbridge, Senior Engineer
‘Future Leaders’ is a series of blog posts by the Financial Times in which we interview our team members and ask them how they got into technology, what they are working on and what they want to do in the future. Everyone has a different perspective, story and experience to share. This series will feature colleagues working in our Product & Technology teams. You can also connect with us on Twitter at @lifeatFT. Adam Braimbridge, Code poker and friend to dingoes everywhere. @uxtremist Hi Adam, what is your current role at the FT and what do you spend most of your time doing at work? My job title is ‘Senior Engineer’. I’m in the ETG (Enabling Technologies Group) and I think of what we do as ‘developer experience’ — user experience for developers. There’s an overlap between making life better for developers and helping to stop FT.com & related apps from grinding to a halt, as the previous version of FT.com (codenamed “Falcon”) did. Does that involve a lot of monitoring? Or just when things go wrong you’re called in as the cavalry? Part of it is the cavalry, or putting out fires, but I think that individual teams already do a fantastic job of that. Helping developers do their work more easily is really what I’m aiming for. A lot of that is knowledge sharing and good documentation; and even things like “rubber-ducking” (talking out loud to understand a problem) are useful. I’m trying to be a bit of an advocate for culture … a developer advocate, if you will. For example: One thing I find myself doing sometimes is to sort of play this character. I play a version of Adam that’s a little bit foolish and a tiny bit rebellious — and much more confident than I really am. A subset of developers of all levels tends to have this thing called “imposter syndrome”, which can prevent them from contributing their ideas or objections. This is a huge waste of valuable brain power.
What I’m doing when I act this role is I’m setting the bar — within reason — to make it more comfortable for all developers to talk openly about whatever they want to talk about. Because no matter what they talk about, it’s not going to be as ridiculous / naive / contentious as whatever Adam just said. Do you manage anyone right now? Yes, I’m a line manager for two wonderful people. They make me look so good! Jennifer and Keran, I couldn’t ask for better reports. They came in as junior developers and they’re both mid-level now. We’re already building experience towards senior level. It’s great because I don’t have to push them; they’re driven, smart and so quick to learn. I really enjoy managing, it’s a rewarding challenge. (L-R: Jennifer, Adam and Keran) So, how did you get into the technology industry? Oh boy, let’s see. When I was about thirteen, my Dad came home with a Commodore 64 and my brother and I played that. You’d power it on — it was plugged into a TV set, which I think was black and white. I remember turning it on and it flickers up and you could write programs in it, very basic ones. And if you wanted to run a game, it would say, “Press play on tape” and you’d literally put an audio tape into this player, hit play and then go do something outside for an hour. And then we’d come back inside and play this cricket game or whatever on the C64. From that early age I was sold. I thought that was the bees knees. In high school I was doing quite well in computer studies, but in Year 11 they couldn’t find any programming teachers. At that time in Perth, there weren’t many high school teachers who knew how to program. I was told there’d be a programming teacher in Year 12, so I persevered, but when I started Year 12 there was still no teacher. That meant I couldn’t do my university entrance exams, so that really stuffed me around. I went to the careers guidance counsellor and filled out a form for an apprenticeship. 
They gave me a booklet and asked me to pick three preferences. I picked computer technician and electrician but I couldn’t find a third one that was even close to programming. Now, the day before I’d watched a movie called “Death in Brunswick” which is about a chef, so I chose “chef” as my third choice. Sure enough I ended up doing a four-year chef apprenticeship, because of that movie. I stuck it through and finished it, then left Perth to travel around Australia for a couple of years as a travelling chef. DVD Cover for “Death in Brunswick”, [Fair Use], via Wikipedia. Yes, that’s the guy from Jurassic Park. Although it was great for travelling, it wasn’t what I wanted. I went home, switched to cheffing part time and did a Diploma of Interactive Multimedia. I wanted a career in movies or games. To my surprise, I discovered a knack for programming. I had thought it was all maths and algorithms, but really it’s just thinking things through and finding the edge cases. I taught myself a programming language called “Lingo”, which was the language for a popular thing at the time called Macromedia Director. I got good grades, which led to my first part time programming job in a company called “Fun Ed”, making educational games for kids. After finishing the diploma, programming employment opportunities in Perth were scarce. So I built my own business called “Castledale Virtual Tours”, doing digital photography of houses (and oddly enough, luxury yachts). I developed a process for taking 360-degree photos, and wrote a “Flash” programme to download and view them online. Real estate agents would just give me their username and password for their business websites and I would upload the virtual tours for them. Internet Archive snapshot of “cvirtual.com.au”, August 2004. Best viewed at 800 by 600. That was great except I was working by myself, which got very lonely. I could have employed people but I didn’t have the confidence to tell someone else what to do. 
One of my clients was called “Bam Creative”. They did websites & digital advertising for businesses in Perth. One time, I had a two-day contract to do some “Actionscript” programming (something of a specialty at the time). I remember this very well … it was my first time working in their office and the Syrian Army (or equivalent) hacked their servers! They were running around like headless chickens. The phone was ringing constantly. No one asked me to, but I just started picking up the phone and answering, “Hello, Bam Creative, Adam speaking. Yes, we’re currently undergoing technical difficulties, let me take your details and we’ll get back to you by the end of the day.” The owner was impressed enough with that to offer to buy my business and give me a full-time job. So I worked as a developer for Bam Creative for a number of years. During that time I had a one-year break in Vancouver, when I didn’t program at all. Arriving in Vancouver, I swapped my Macintosh laptop for a Kawasaki 440 LTD motorbike, borrowed a tent, and went off into the British Columbian forests with no plan but to see how far I’d get. Adam and his Kawasaki 440 LTD in Vancouver, 2005 Were you trying to find yourself? Thinking back on it now … yeah, I was trying to find myself. Did you find yourself? A little bit, but not as much as I wanted to. When I came back to Perth I was complaining to one of my friends about it, who was a nurse, and she said, “Adam, you’ve got depression. Classic depression. Go and talk to a doctor about it”. That was when I was thirty. I went on antidepressants, and they changed my life. By the way, I’m on record at work saying that if anyone wants to talk about depression they can contact me. I’m happy to talk about it with anyone who needs to. I respect confidentiality and I’m good at forgetting stuff, so people feel safe talking to me.
“Pandy Warhol makes a friend”, cosmic_unicorn_3000, via Instagram Anyway, in 2010 I left Bam Creative and came to London, wanting to relaunch a career as a UX specialist. I even changed my Twitter handle to @uxtremist. I had three interviews lined up; the third was with a little company called “Assanka”, owned and run by Andrew Betts and Rob Shilston. I was incredibly lucky to get that job. A few years after I joined Assanka, it was acquired by the FT and renamed “FT Labs”. I wanted an excuse to work from the FT home office, assuming it was inevitable that we’d all move across eventually, so I volunteered to join “Code Club”, teaching nearby primary school kids how to code. Because the school was near the head office, I worked from there one day a week. Eventually I was there full-time, on the FT Blogs / Alphaville team. “Next FT” was less than a year old. One day I saw a big screen with user-data charts on it and was like, “This is amazing, this is awesome, this is what I’m talking about!”. Matt Chadburn (Technical Director of FT’s Internal Products team) walked by and said “Can I help you?” I said “Yeah, this is really interesting! What’s it all about and how does it work?” Matt responded to my natural curiosity. One thing led to another, which led to a bootcamp with his team in Next FT. On my first day I made a Chrome extension called “Mollydobbin”, to help developers add tracking data correctly. I think Matt was mildly impressed with that, because when I asked if I could join the Next FT team full time, I was allowed to. Thanks to Matt Chadburn, Matt Andrews, Rik Still and Michelle Shakes — and many other people — I’ve been very lucky at FT. I owe my success to all the really cool people in the Next FT team. That’s so nice! What is the project you’ve worked on at the FT which you’re most proud of? The thing I’m most proud of by far is mentoring students who’re learning how to code.
It’s a long story, but I do a thing called “Big Kids’ Code Club”, where I tutor people from all parts of FT, helping them to develop the skills for working with tools, thinking things through, and figuring stuff out. Basically, how to google better. I like doing that, because personally I learn so much when I’m teaching; it makes me feel helpful; and I get great feedback. I even won an FT Culture Award for it! Flattery charges my battery. 😊 A personal project I’m proud of (and which I plug whenever I can) is an internal FT website called houston.ft.com. “If you have a problem, ask Houston.” The site is the result of me trying to make the most useful tool I can. It’s basically a hub, with an overview of everything, helpful search tools, a collection of useful internal links, and a reference to our publishing pipeline. Now the biggest project I’m proud of is a group of tools that our team informally calls the “Sushi Suite”. The story goes: When Amy Nicholson (who is so amazing by the way) was at the FT, she was instrumental in helping ETG find a new, better way to work. Amy and Sam Parkinson arranged an out-of-office away day to come up with a long term strategy, instead of just being tactical. Because “tactical is not practical!” We decided to focus on improving the way we do code migrations. Instead of doing work for all the teams — which isn’t scalable — we would make tools to make it easier for teams to do their own migrations. We started off thinking about our codebase. We have over three thousand GitHub repositories at the Financial Times. How do we know which ones FT.com is responsible for? You can have a spreadsheet or a list and keep that up to date manually, but no one wants to do that. So we made a tool called “Tako” (which means “octopus” in Japanese — the GitHub mascot is an “octo-cat”). It’s a single source of truth that we can trust to get a list of our repos. We can refine the Tako list using a sophisticated tool called “Ebi”, which means “Prawn”. 
If you’re a developer it’s worth looking that up. Tako and Ebi make it possible to know exactly which of your repositories need changes. Once you know which repositories to deal with, how do you automate a code change across them all? The answer used to be that you’d check them all out and then manually copy and paste your code fixes. It takes a couple of days or even weeks to update everything. So as a team, but championed by Bren Brightwell, we made a tool called “Nori”. Nori wraps our tools into a command-line wizard, and was named because in Japanese, “Nori” is an edible seaweed that’s used for wrapping sushi. Logo for nori So: We’ve got our list of repositories and we’re making changes to them all, but how do we know that we’re finished? Have all those changes been rolled out? For that we’re using GitHub projects. You can manipulate GitHub issues and pull requests and stuff like that with a tool called “Asari” (or, “Clam”). It was a team effort of course, but personally the code I’m most proud of writing is in that tool. Typing asari commands in a computer terminal to manage repositories in GitHub. That’s that — the Sushi Suite. By the way, a nice side-effect of this project is that I’ve been able to do tech presentations on it, for internal FT audiences, for the “London Web Performance” meet up, and at the Guardian office. That’s really cool. What are the biggest lessons you have learned in recent years? That’s a tricky one. Three TED talks have been utterly transformative for me: Dan Ariely’s “Are we in control of our own decisions?”, Dan Pink’s “The Puzzle of Motivation” and Michael Shermer’s “Why People Believe Weird Things”. Learning recently about “Glue Work” was an awakening. It’s the “less glamorous — and often less-promotable — work that needs to happen to make a team successful.” It’s pervasive and a worthwhile opportunity for improvement.
Aside from that, there’s been something brewing in my thoughts over the past few years that ties into just about everything. For some time I’ve known that everything comes down to respect and communication. They’re the most important part of being human. Now, on top of that, what I’ve learned is that everything is on a sliding scale. Whenever someone says “Oh, never do this” or “Always do that” or “All things are X” or “All things are Y”, I mentally convert that into a sliding scale where you’ve got nothing on one side and everything on the other. Then I kind of picture where that statement sits on that scale. It’s difficult to explain without concrete examples, but nothing is black and white. I’m very aware that everything’s relative; everything is on a sliding scale. Keen minds spot that “Everything is on a sliding scale” includes the word “everything”, which is a bit hypocritical, but I prefer to call it a paradox. 😁 By the way, this way of thinking sits well with OKRs (Objectives and Key Results), which I’m a volunteer “champion” for. It turns out that Key Results are not “the tasks you’re going to do”. They’re the ways you are going to measure the success of whatever it is you want to achieve (by doing those tasks). In other words, the Key Results are a sliding scale, and the work you do is measured along that scale. So anyway, sliding scale is part one. Part two is: Almost every problem boils down to “signal vs noise”. What I mean is that communication is super important, and when you have problems with communication, it’s more often than not a problem of too much noise. There’s so much noise to deal with that it’s difficult to get a good signal. So when someone says something to you, because you don’t both share the same brain, the signal can get lost in the noise and you may misinterpret it. When I’m faced with a problem or challenge I think “Ok, where’s the signal?
Where’s the noise?” Because if you don’t acknowledge that up front, you are off on the wrong foot because you’re dealing with a noisy signal. The most important thing (which I’ll tell anyone who will listen to me), the first thing you should always do, is start with the problem. The trick is, if you understand the problem then you can come up with a better solution. Which reminds me: Coming up with ideas for solutions is really easy, but understanding the fundamental problem is really difficult. As engineers or even as humans, we tend to ‘solutionize’; that is, we jump on the first solution we think of, rather than thinking through the problem and imagining a bunch of different possible solutions. One of my favourite quotes is cited by Steven Pinker: “Problems are inevitable. Problems are solvable. Solutions create new problems.” Yeah, it’s better to fix something so that there aren’t as many problems in the long run. That makes sense and is very philosophical. This leads nicely into the final question: What would you like to do in the future? Good question. I’m planning on staying with the FT and I want to continue improving the developer experience for everyone. I’m specifically interested in helping to bring the FT up to speed in one area: We’re currently lagging behind the rest of the technology industry in terms of remote working. I want to research what the competition is doing and see how we measure up. I think the hardest part is figuring out how it will all work as a policy. At the moment it’s easier for people to just say “no, we don’t have a policy for working remotely”. I would never presume to be responsible for writing an official remote-working policy for FT, but I think I can make it easier to share information and give clarity. I want to make a framework so that different departments in FT can publish compatible remote-working policies. I’m inspired by the recent “Engineering Principles” and “Engineering Checklist” projects.
Again — it comes back to signal vs noise and good communication, and everything’s a sliding scale. Obviously it’s ludicrous to say, “Yes, you can work from absolutely anywhere in the world!” but it’s just as ludicrous to say, “No, you can’t work from anywhere except the UK.” I want to slide the scale to the “good” end. Otherwise we’ll keep missing out on recruiting international talent, and losing our developers when they move to different countries. The good news is that we already do remote working; for example, we work with the teams in Sofia, Bulgaria. But we can’t work just anywhere. What I mean is, take Cyber Security for example. In some places you can’t enter the country without the government searching your device and installing software onto it. So there’s no way we should be allowed to take work devices in — and then of course we can’t do any work. Other examples probably include not being able to work from certain countries for insurance and tax reasons. I don’t know the details, and that’s what I’m trying to find out. Confession time: I do have an ulterior motive. I want to be a guinea pig and work in New York, even just for a week or so, still in the ETG for the London office. I think that could be something that is not ludicrous, but it’s a little bit … novel. It makes business sense in terms of proving that it can be done. As an experiment to see if it can be done, that would be great. Personally I would like to try living in New York for a while anyway. Is there anything new you’d like to explore or learn about? Oh yeah, plenty of things. I’m looking forward to learning a bunch of new stuff in the next project our team’s working on. We’re teaming up with the “Page Kit” team: Maggie Allen and Matt Hinchliffe. Kit means “something that generates” and Page means “web page”. So Page Kit is a project for modernising how we generate our web pages. Also, there’s this new service called “Netlify” who are pushing this idea called the “Jam Stack”.
I’d be rather interested in that. It’s something I’d like to spend a bit of time on to see if we can talk about doing procurement and using it in our stack. FT.com Page Kit. You know, for web pages 🤓 Speaking of exploring, it’s been a while since I went out into the wilderness on a motorbike camping adventure. I wonder … *looks into the distance and strokes beard thoughtfully* Ever the explorer! Thanks, Adam! Interviewee: Adam Braimbridge Interviewer: Georgina Murray
https://medium.com/ft-product-technology/future-leaders-adam-braimbridge-senior-engineer-c984c033f1d8
['Ft Product']
2019-08-05 14:49:08.600000+00:00
['Engineering', 'Commodore 64', 'Tech Culture', 'Financial Times', 'Web Development']
An example of the elegance of clear and simple speech in poetry
When I write a poem I try to keep the words as simple and true as I can manage. The temptation to show off with language is always there, and the temptation to force meaning will make me pause and doubt the words. I know that the music is in the plainsong of language and that meaning is in the precise and simple statement of what we experience and observe. I was in school with Jeff Harrison. Here is one of his poems. I like his poetry because he is able to be skilled and direct. He uses language that you can hold on to. He can be very smart, but he doesn’t let being very smart get in his way. Here’s how he explains it: I tend to dislike deliberate obscurity, for instance, and I tend to gravitate toward poetry that is seriously engaged with both experience and language. A poem: Visitation by Jeffrey Harrison Walking past the open window, she is surprised by the song of the white-throated sparrow and stops to listen. She has been thinking of the dead ones she loves — her father who lived over a century, and her oldest son, suddenly gone at forty-seven — and she can’t help thinking she has called them back, that they are calling her in the voices of these birds passing through Ohio on their spring migration. . . because, after years of summers in upstate New York, the white-throat has become something like the family bird. Her father used to stop whatever he was doing and point out its clear, whistling song. She hears it again: “Poor Sam Peabody Peabody Peabody.” She tries not to think, “Poor Andy,” but she has already thought it, and now she is weeping. But then she hears another, so clear, it’s as if the bird were in the room with her, or in her head, telling her that everything will be all right. She cannot see them from her second-story window — they are hidden in the new leaves of the old maple, or behind the white blossoms of the dogwood — but she stands and listens, knowing they will stay for only a few days before moving on. 
2006 You can visit his web site here.
https://medium.com/drmstream/an-example-of-the-elegance-of-clear-and-simple-speech-in-poetry-6b8cfb63b408
['Dan Mccarthy']
2016-11-10 02:27:31.720000+00:00
['Remembering', 'Observing', 'Writing', 'History', 'Art']
Why I Chose to Have a Pimp
To understand why I work in a run-down brothel, we have to go back to when I was 11 years old. My best friend was Anna from down the road. I scored super high on my year-six exams, and I was about to begin dealing with a downward health spiral that still affects me today. During the first week of the summer holidays, my mum took us to the cinema — a big treat. During the movie, I felt sick and began sweating, unable to concentrate. In the car on the way back, my head lolled, and I couldn’t support it anymore. Then, all I felt was heat and pain and nausea and the days began rolling into one another. I was diagnosed with swine flu and was horribly sick all that summer, and it turned out that the dirty cinema lobby was the last time I ever felt real. Sometimes, clinicians refer to myalgic encephalomyelitis (ME) as post-viral fatigue syndrome (PVFS). You have a virus, and you never really recover. It’s like having the flu, but it doesn’t end. It took six years for me to be diagnosed. Nobody thinks “tired” is a legitimate health concern. ME isn’t just feeling tired. I had two episodes where I woke up and everything from the waist down was on fire, like I had been dipped in acid, almost unable to walk. My hands started shaking at random times. I would sleep for hours when I got home from school, waking up just to eat and do homework. The cognitive symptoms were the slowest to creep up on me: a gradual drop in academic performance, the inability to recall basic facts. Sometimes, people would ask me a simple question and I would stare at them. But my decline was getting more and more rapid. Halfway through year 12 of school, I crashed. I was attending only about seven hours of classes a week. I could barely stay conscious. Doctors couldn’t deny that something was wrong. I had test after test after test. 
They cycled through autoimmune conditions but found nothing that really fit. Finally, they diagnosed me with ME and chronic pain — something I didn’t even realize I had. I thought everybody felt this awful all the time. I still remember the rheumatologist pressing his fingers into my skin and explaining to me that, normally, people don’t wince at that. The most infuriating part of my illness was that I couldn’t think like I used to, couldn’t string together sentences easily or see patterns or understand things; it was like I wasn’t myself. In my first year of university, I slept 16 hours a day. I couldn’t find a job that I was well enough for. I didn’t qualify for anything but the lowest possible amount of student finance. I dipped in and out of sex work but hit barriers everywhere I turned. I couldn’t reliably book clients in advance because my health was so unpredictable. Messages would go unanswered in my inbox because I was too ill to reply. I saw clients, but they were few and far between. By the second term of my second year, things had taken a positive turn. I was getting better. It can happen with ME patients; symptoms and the progress of the illness are so unpredictable. I was sleeping only 12 or so hours a day. As long as I didn’t push my limits, I could generally manage my life. Even my cognitive symptoms improved. In return for this ability to stay conscious and think clearly, however, my pain increased in severity. Sometimes I would be walking to class and find myself doubled over in pain. I was still poor and still sick, but I was just about well enough to do something about it. A few months into this shaky half-recovery, I found the brothel on the internet and called. I had a shift within days.
https://humanparts.medium.com/why-i-chose-to-have-a-pimp-af7780a2625c
['Lydia Caradonna']
2020-05-14 20:39:53.686000+00:00
['Health', 'Sex Work', 'Feminism', 'Women', 'This Happened To Me']
Terraform Code Quality
Terraform code quality: checking for compliance So far, we have talked about the initial steps, with purely static analysis, and we got a bit further with TFLint by asking the API for real IDs, checking against reality, etc. If you’re familiar with Terraform, after this analysis step you have the planning phase. The planning phase takes all your code and produces a diff, checking against the cloud provider’s API for what you are about to create. If you want to create a new S3 bucket and you don’t have it, then Terraform will make a plan to create it. Your next steps are to review the plan and then apply it to create that resource. At this stage, you might want to enforce some compliance rules with a tool creatively named terraform-compliance. A quick digression here, if you’re not familiar with the Cucumber BDD (behavior-driven development) structure: it involves a feature name, a scenario, and conditions: given “blah blah blah”, you are expected to have “this result”. This is executed in the background by some code, which then provides you with results. The BDD system is very useful for testing code, but in this case it is used for compliance, using natural language. If we take the previous example, we could have written something like this, and it would have been perfectly supported by terraform-compliance against the plan:

Given I have AWS S3 Bucket defined
Then it must contain server_side_encryption_configuration

Let’s be more specific:

resource "aws_s3_bucket" "example_from_reddit" {
  bucket = "my-secret-bucket"
  acl    = "private"
}

In this case, we created this bucket but we didn’t create a tag. We might have a process where it is very important that all our resources have tags, so we can write a very simple scenario like this:

Feature: All Resources
  Scenario: Ensure all resources have tags
    Given I have resource that supports tags defined
    Then it must contain tags
    And its value must not be null

Why is that? 
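For illustration, here is a minimal sketch of what a compliant version of the bucket above might look like; the tag keys and values are assumptions made up for this example, not from the article:

```hcl
# Hypothetical compliant version of the bucket above: the tags block
# satisfies the "Ensure all resources have tags" scenario.
resource "aws_s3_bucket" "example_from_reddit" {
  bucket = "my-secret-bucket"
  acl    = "private"

  tags = {
    Environment = "production"
    Owner       = "platform-team"
  }
}
```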
You can create a tag with a variable, and it might work all the way through the previous steps but fail at this one, because the variable went wrong. Maybe it was supposed to be a number and it ended up being null? During the planning phase there is some computation, and this computation can go wrong. This is why compliance checks are so important once things are computed and not just statically analyzed. Integrating these checks and their feedback into your CI/CD system (in a pull request) will give you a much better view of what your code is really doing, compliance-wise. terraform-compliance is a provider-agnostic tool, and it works with your own custom providers too. There are a lot of ready-to-use examples, and you really can get started in minutes just by using the examples served directly in the documentation. It is obviously security-oriented, covering all the usual suspects like KMS, etc. The tool also has cool features like letting you enforce naming conventions. You may want to enforce naming prefixes for items like your country, your continent, or your resource type; prefixes can be self-documenting, and forbidden resources can be expressed as well. If you need to be PCI DSS compliant, you might simply be forbidden from using the list of AWS resources that are not PCI compliant. This can be done at the compliance level, well before asking the IAM user on AWS for the rights to use those resources. You can find terraform-compliance at terraform-compliance.com
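The naming-convention idea mentioned above can be expressed the same way. Here is a sketch of such a feature file; the step wording follows terraform-compliance’s documented step syntax, but the “eu-” prefix is an illustrative assumption, so treat it as a sketch rather than a drop-in rule:

```gherkin
Feature: Naming conventions

  Scenario: Bucket names carry the continent prefix
    Given I have aws_s3_bucket defined
    When it contains bucket
    Then its value must match the "eu-.*" regex
```

Running checks like this against a saved plan is typically a two-step affair: `terraform plan -out=plan.out`, then `terraform-compliance -f features/ -p plan.out` (check the tool’s README for the exact flags on your version).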
https://medium.com/faun/terraform-code-quality-66e6468f50f3
[]
2020-06-23 15:53:19.233000+00:00
['Terraform', 'Azure', 'DevOps', 'Infrastructure As Code', 'AWS']
Stoic Social Media Advice
One of the most attractive aspects of the 2,000-year-old teachings of Stoic philosophy is how relevant they still are today. Let’s face it, pearls of wisdom like “If it is not right do not do it; if it is not true do not say it” and “While we wait for life, life passes” never really go out of fashion. What the likes of Marcus Aurelius and Seneca couldn’t do all those years ago, however, was dole out social media advice and philosophize about social networks, at least not the online kind that take up so much of our time today. Having said that, the Stoics were acutely aware that humans are naturally social, and so some of their thoughts on being social still translate well when applied to our use of platforms like Facebook and Twitter. The Greek philosopher Aristotle, although at odds with some of the Stoics’ beliefs, still got it right in their eyes when he said that man is by nature a social animal. Taking that into account makes being social doubly important to the Stoics, given their ideal of living “in accordance with Nature.” Speaking of the well-known platforms, data from DataReportal shows that, as of July 2020, 3.96 billion people use social media. That’s a lot of people being social. Or at least that’s the assumption. If you’re checking in everywhere you go, if the only photos you post are selfies, or if more than one of your recent statuses ends with “rant over!!!”, then perhaps you aren’t being as social as you think. Drawing that conclusion isn’t meant to be judgemental; it’s an observation that could change things for the better, and even increase your own enjoyment of the social networks. When you hit submit on those posts, what’s the first thing you think? There’s a good chance it’s “I hope people like this.” Apparently a like equals a hit of dopamine. It feels good. But what precedes that? A certain amount of anxiety. The potential embarrassment of it not being liked, or worse, ridiculed. 
Being derided by even one person does not feel good. And that anxiety can come in other forms too. Maybe you rarely post but still spend a lot of time scrolling through other people’s updates. Wow, everyone else’s life is so much better than mine! If that thought comes to mind, just remember this: a fifth of young people admit their online profile bears little resemblance to reality, and that their recollection of past events has been distorted by their own fabrications. So, what would Marcus Aurelius say? Probably this (and indeed he did, in his Meditations): “The happiness of those who want to be popular depends on others; the happiness of those who seek pleasure fluctuates with moods outside their control; but the happiness of the wise grows out of their own free acts.” By placing importance on likes and comments, you’re putting your mood in the hands of others, allowing them to drip-feed you dopamine when they see fit. Don’t outsource your self-esteem. Rather than waiting to take, you might feel better if you give more. Marcus Aurelius again: “Men exist for the sake of one another.” To do this you could make small changes to the way you post. Perhaps try one of these the next time you’re about to post a selfie or a rant:

Share something inspiring you read recently
Make a sincere product recommendation
Give advice to someone looking for help on a topic you have real knowledge of
List books, podcasts or YouTube channels you have gained insight from
Highlight a good cause
Tell the story of a problem you have faced and how you solved it

Even if these things don’t garner as many likes, after posting one of them you can be happy within yourself that it’s likely to be useful to at least one person who sees it. That usefulness doesn’t depend on receiving the “thumbs-up”. Sharing a funny video or meme isn’t a bad thing though! It’ll probably make someone laugh and brighten up their day. In that sense you’ve impacted someone positively. 
But that’s a fleeting moment and if that’s the basis of all your posts then you might think of varying them a bit. It’s not so much a vision where everyone’s holding hands and fake full-tooth smiling at each other, just the idea that by improving ourselves a little at a time we can inspire that in others. By all means, post a selfie every now and again, just not every day. As Ralph Waldo Emerson put it: “Moderation in all things, especially moderation.” If you already do follow a “useful usage policy” and dislike what you’re seeing from others, then continue to be the change you want to see. And/or just block them. 🙂 To give the final word to Marcus:
https://medium.com/stoicism-philosophy-as-a-way-of-life/stoic-social-media-advice-585600490625
['What Is Stoicism']
2020-09-12 20:19:56.716000+00:00
['Facebook', 'Marcus Aurelius', 'Social Media', 'Stoicism']
Understanding the Entrepreneurial Mindset
Photo by Bruce Mars on Unsplash Questioning the world around you, and spotting hidden opportunities to provide a solution for a problem, constitutes an entrepreneurial mindset. This mindset results in a willingness to move forward and change your surroundings with a revolutionary idea. Questions like “what if” are daily table talk for such people. Not only do entrepreneurs strive to jump-start a business from scratch, they often do it without any stable source of income. For example, when the four graduate students who thought of challenging Luxottica started Warby Parker, they were not sure they could convince anyone to buy cheaper eyewear. Now, they are the largest online eyewear seller. For all the benefits of such an approach to life, it is not always a bed of roses. There are many hard-fought battles ahead when it comes to building a business from nothing. Entrepreneurial Guilt and How to Deal with It According to research, 30% of businesses fail in their second year. The factors that boost the motivation to start a business are often the same reasons for burnout. More often than not, these reasons include the attitude of those around us and our self-imposed emotional limits. If you Google the term “entrepreneurial guilt,” you will find useful information but no proper definition of the phenomenon. It makes you realize how little this behavior is acknowledged, even within the business community. It makes sense to some degree, because we often dismiss something by refusing to name it. We hear people say “workaholic” or “poor work-life balance” instead of using the term entrepreneurial guilt. Following are some of the classic symptoms of this guilt, along with their solutions. Not Doing Enough The most common driver of entrepreneurial guilt is your subconscious tricking you into thinking that you are not doing enough for your business. 
I handle search engine optimization (SEO) for my own projects in print-on-demand (POD) and e-commerce. Since I no longer take any client work, I assumed I would not feel the pressure. To my shock, the opposite turned out to be true. I usually tackle about 10,000 words of SEO work daily, and even then, it feels like there is more I should be doing to stay relevant. The same feeling arises when I take time off for other things. It feels like taking time for anything else might cost me a huge success, even though I know deep down that it is not the case. Photo by Charles Deluvio on Unsplash Learning Curves A classic self-imposed limit is the feeling of failing to learn a new skill, or of having to do everything alone. Doing my own thing has made me question my abilities and the extent to which I push them. The answer to all such emotionally draining feelings comes down to never giving up. There were some things I did not know how to do when I started, and I thought those challenges meant my business idea was not good enough. People Around Us When I started my business, I can clearly remember my parents being ashamed of what I did. They felt my newfound independence was distracting me from what I should have been doing. My “friends” thought I was committing social suicide by not opting for a regular 9-to-5. I believe the people around us hold a fair share of our motivation. It is the environment around us that sometimes makes us suffer. Solutions Here are some ways to deal with and overcome entrepreneurial guilt: Acknowledge the Guilt The first step to overcoming this guilt is accepting its presence. We need to understand that not addressing such feelings can do more harm than good. Now and then, we need to take a step back and realize that feeling you are not enough is a perfectly human emotion. Surround Yourself with Supportive People I feel that this is the most important tip. 
In your constant struggle to reach a goal, you need a strong team of like-minded people. Your peers go through similar challenges and can give you the support you need. When I surrounded myself with entrepreneurs at various events and meetups, I noticed that many of them supported my business ideas. I was able to benefit from their experience, and I am grateful for that. I also met my mentor about four years ago, who supported me both mentally and practically in bringing my idea to fruition. Photo by Priscilla Du Preez on Unsplash Consider Other Financial Strategies Whenever you start a new business or start-up, always ask yourself whether you can rely on other financial assistance. This is another important tip that I have found valuable. At the start, I was naïve about initial funding, and I was under a lot of financial pressure. I quit my day job back then to focus full-time on the business, and in retrospect, I feel that was a mistake. I should have kept my day job and paid my bills while working on my business. As an entrepreneur, you need to acknowledge that there will be a lot of taxes and bills to cover, and you usually need a large initial investment to help with them. Understand the Guilt As an entrepreneur, I have learned to accept that these feelings will never completely go away. Sometimes we feel guilty for overworking, and sometimes we feel horrible for not putting in enough, even though, deep down, we know we have given our best. I always use a strategy of asking myself one question: “What do I gain from feeling guilty?” I jot the answer down on paper for future reference. Focus on Yourself Once I started facing guilt myself, I did proper research to find out what the professionals recommend. I was happy to see that my instincts matched what I found. To be a successful entrepreneur who uses this mindset to maximum advantage, I need to take care of myself. 
That care also comes in the form of time for myself. Before taking time for anyone else, it is vital to acknowledge your own needs. Never Compare I feel this is an often-neglected tip for overcoming entrepreneurial guilt. To survive in the business world, we need to realize that every business and business owner differs by industry, geography, leadership, company culture, skills, and so on. I have learned to keep an open eye for observation and learning, but not to obsess over the other party. An envious attitude forces you to change every essential practice in favor of what is popular in the market, which can be disastrous. Not every new approach is tailored to your business. Work-Life Balance For the two years that I have been maintaining a steady business, I have paid no attention to work-life balance. I work anywhere between 12 and 16 hours every day, and in my case, it works perfectly. A few days ago, I read a news piece that made me smile: most Austrians are at risk of burning out because they work long hours. Contrary to the popular belief in pacing yourself, I found that I would sometimes work more than 16 hours and still feel satisfied at the end of the day. If you have found a job that you like and feel passionate about, you will find a way to be up to the task every day. Since there is no perfect formula for work-life balance, and it varies from individual to individual, I reached out to my entrepreneur peers to see how they handle this complex problem. Some colleagues told me that in addition to passion, you need the physical and mental capacity to devote time and space to the people you value most. Convert your hobby into work, and work alongside your best friends. Others told me that the term “work-life balance” paints quite a sad picture and should be “work-life fit,” which seems less like a compromise. I also learned that keeping a time-management journal helps a lot. 
As an entrepreneur, a good chunk of your daily schedule is predetermined, and it is easy to predict what the end of the week will look like. This habit also gives a sense of control over things, leading to less guilt and less eventual entrepreneurial burnout. According to Harvard Business Review, burnout is dissatisfaction at the workplace, displayed as “absenteeism, inefficient decision making and turnover.” The research also finds that entrepreneurs are at moderate risk of burning out (about 25%). Still, a high number of entrepreneurs reported passion for their projects, specifically those who agreed that their passions go beyond money, i.e., they were doing what they liked and enjoying it. Research like this fascinates me because it tells me how people like me are tackling common issues, and I feel well-adjusted in my work sphere.
https://medium.com/illumination/understanding-the-entrepreneurial-mindset-3066b8f6778d
['Changwon C.']
2020-12-03 04:02:33.946000+00:00
['Starting A Business', 'Work Life Balance', 'Business', 'Entrepreneurship', 'Self Improvement']
JavaScript ES2020 Features With Simple Examples
Introduction ES2020 is the version of ECMAScript corresponding to the year 2020. This version doesn’t include as many new features as ES6 (2015). However, some useful features have been incorporated. This article introduces the features provided by ES2020 through easy code examples. In this way, you can quickly understand the new features without the need for complex explanations. Of course, basic knowledge of JavaScript is necessary to fully understand the features introduced. The new JavaScript features in ES2020 are:

➡️ String.prototype.matchAll
➡️ import()
➡️ BigInt
➡️ Promise.allSettled
➡️ globalThis
➡️ for-in mechanics
➡️ Optional chaining
➡️ Nullish coalescing operator
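As a quick taste before diving in, here is a small sketch of three of the features listed above; the `user` object and its values are made up for illustration:

```javascript
// Optional chaining: returns undefined instead of throwing a
// TypeError when an intermediate property is missing.
const user = { profile: { name: "Ada" } };
const city = user.profile?.address?.city; // undefined, no crash

// Nullish coalescing: falls back only on null/undefined,
// unlike ||, which also replaces "" and 0.
const name = user.profile?.name ?? "anonymous";
const count = 0 ?? 10; // stays 0; `0 || 10` would give 10

// String.prototype.matchAll: iterate over every regex match.
const digits = [..."a1b2".matchAll(/\d/g)].map((m) => m[0]);
```

The `?.`/`??` pair is the most immediately useful of the bunch: together they replace long `x && x.y && x.y.z` chains and the subtly buggy `||` default pattern.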
https://medium.com/better-programming/javascript-es2020-features-with-simple-examples-d301dbef2c37
['Carlos Caballero']
2020-01-28 13:26:05.183000+00:00
['Es2020', 'Programming', 'Front End Development', 'JavaScript', 'Typescript']
The Girlboss Is Over. It never served anyone in the first…
As the collective internet recently descended on cancelling the term #Girlboss, my friends and I, who for so long fit the definition, sighed in relief. The best part is no longer having the narrative to buy into in the first place. A slew of prominent female founders were ousted recently, in a wave of both Covid-19-related business failings and reports of toxic and racist workplaces. As self-proclaimed #Girlbosses they ticked the gender diversity box, and sought praise for that alone. They got it. Until recently, I was the same. I grew up watching The Devil Wears Prada, hell-bent on pursuing fashion journalism to achieve the Sex and the City dream. Fresh from graduation with a public relations degree and facing the dissolution of steady media jobs post-recession, I took a chance on myself. Enter Sophia Amoruso. I read #Girlboss, I stumbled across Gary Vaynerchuk’s ‘hustle culture’ content, and set off under the millennial-pink umbrella of mid-2010s feminism. I was a #WomanInTech, a #Girlboss, a #FemaleFounder, a brand. I bought Glossier, wore Outdoor Voices, and went to the Girlboss Rally. I was a member of The Wing. Being branded a #Girlboss was cool. A compliment. Just being female earned a diversity tick. I wrote press releases for clients that highlighted the percentage of women in their workplace, even though I knew that behind the scenes they were not supported. Just our presence seemed good enough. The only ratio anybody worried about increasing was the female percentage. Not race, ability, sexual orientation or class. In my tech career I worked with a range of people, but only directly hired other white, millennial females for internships. I saw myself in them, and wanted to pull them along in my footsteps. But that’s how the problem perpetuates itself. It’s how white men created their own self-perpetuating boys’ club. The #Girlbosses were simply doing that all over again. It was never about true change. 
We lived in a self-indulgent bubble that didn’t even serve us; it just took years to realise it. Because if the system doesn’t exist for everyone, it serves no one. “(The #Girlboss) saw gender inequity everywhere she looked; this gave her something to wage war against. Racial inequity was never really on her radar. That was someone else’s problem to solve.” — Leigh Stein, “The End of the Girlboss Is Here” We thought that money and power in the hands of the #Girlboss was a good thing. Yet these new resources often just improved the individual, not the community. The #Girlboss wasn’t holding the door open for those behind her. The myth that the #Girlboss was improving the workplace is dangerous. We didn’t change the system; we just showed up and tried to emulate it. And by thriving within it, we failed. We never questioned how to change the system, or who it was even designed for. We just tried to show up as the girl version of the bosses we wanted to be. We knew how to be sexy but not overtly, to seem powerful but also carefree. Be cool, relatable, funny, but not enough to be deemed unprofessional. I know that personally, in order to keep my seat at the table, I ignored every sly comment, every jab at my gender or sexuality. Every micro-aggression. I ignored them all. I failed. Then 2020 arrived. A string of golden-era #Girlbosses were outed for racism, harassment, and toxic work environments. Among them were the founders of The Wing, which fell astonishingly quickly from grace. The Wing is probably the ‘best’ example of the problematic #Girlboss. As stories swirled earlier in the year about the treatment of workers, The Wing kept up its facade of female empowerment. The implications of Covid-19 finally shuttered the social club, and stories from women of color working as space staff were finally heard. Their treatment behind the scenes was the antithesis of what The Wing preached publicly. 
In former member Lindsay Daly’s video essay about The Wing, she documents the mostly positive physical and social experience members once found there. The Wing had seemingly good intentions, but it failed. Daly notes how feminism and capitalism ultimately struggle to intersect, as The Wing prioritised the growth of its business over putting into practice the ideals the company was founded on. “Feminism can be ugly. And dirty. And it doesn’t always look good on Instagram.” — Lindsay Daly So, what’s next? Of course, female founders are here to stay. However, now they can exist without labels, and rewrite the rules to serve everyone, not just themselves. Covid-19 and Black Lives Matter have accelerated the age of accountability, and there is no more hiding behind Instagram-ready feminism. The new wave of female founders are not just going to make money; they’re going to make a difference.
https://medium.com/better-marketing/girlbosses-agree-the-narrative-should-die-164769be02db
['Jess Thoms']
2020-11-19 12:54:56.174000+00:00
['Women', 'Startup', 'Business', 'Culture', 'Female Founders']